I built a small routing runtime (IntentusNet v1.0.0) that is intentionally limited:

- synchronous execution
- explicit routing strategies (direct, fallback, broadcast, parallel)
- deterministic agent ordering
- no retries, no workflows, no scheduling
It’s useful only when multiple handlers can satisfy the same intent and the fallback path needs to be predictable.
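To make the fallback guarantee concrete, here is a minimal sketch of what a deterministic fallback route could look like. The names (`Handler`, `route_fallback`) are illustrative assumptions, not IntentusNet's actual API; see the repo for the real thing.

```python
from typing import Callable, List, Optional, Tuple

# Hypothetical sketch: `Handler` and `route_fallback` are illustrative
# names, not IntentusNet's actual API.

Handler = Callable[[str], Optional[str]]  # returns output, or None on failure

def route_fallback(
    intent: str, handlers: List[Tuple[str, Handler]]
) -> Tuple[Optional[str], List[str]]:
    """Try handlers in the exact order given and stop at the first success.

    The list order *is* the routing order, so the fallback path is
    deterministic, and the returned trace records every agent tried.
    """
    trace: List[str] = []
    for name, handler in handlers:
        trace.append(name)
        output = handler(intent)
        if output is not None:
            return output, trace
    return None, trace  # every handler failed; trace still shows the path

# Usage: handlers are tried top to bottom, every run, in the same order.
output, trace = route_fallback(
    "summarize",
    [
        ("primary", lambda i: None),           # simulate a failing agent
        ("fallback", lambda i: f"done: {i}"),  # succeeds second
    ],
)
assert trace == ["primary", "fallback"]
```

Broadcast and parallel strategies would differ in how results are collected, but the ordering guarantee is the same idea.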
Code and documentation: https://github.com/Balchandar/intentusnet
I’d appreciate feedback on where this approach breaks down or isn’t worth the trade-off.
With deterministic routing and fallback, failures stop being mysterious: you can replay an execution, inspect the exact path it took, and tell a structural bug apart from model behavior. Without that, everything collapses into “the agent acted weird.”

In practice, determinism shifts agents from experimental systems to debuggable ones. It doesn’t remove intelligence; it makes responsibility visible.
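As a sketch of what makes that replay possible, here is the kind of execution record a deterministic router can emit. The field names are assumptions for illustration, not IntentusNet's actual schema.

```python
import json
from dataclasses import dataclass, asdict
from typing import List

# Illustrative only: field names are assumptions, not IntentusNet's schema.

@dataclass
class ExecutionRecord:
    intent: str
    strategy: str      # "direct" | "fallback" | "broadcast" | "parallel"
    path: List[str]    # agents invoked, in order
    resolved_by: str   # the agent that answered, or "" if none did

def dump(record: ExecutionRecord) -> str:
    """Serialize a record so the run can be inspected or replayed later."""
    return json.dumps(asdict(record))

# To replay: re-run the same intent through the same ordered handlers and
# diff the paths. A differing path points at orchestration; an identical
# path with a different output points at the model.
record = ExecutionRecord("summarize", "fallback", ["primary", "fallback"], "fallback")
print(dump(record))
```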