This is a small, open-source reference implementation of an execution-time control gate for agentic AI systems.
LLMs can reason and propose actions, but they never execute directly.
All execution is mediated by a deterministic gate that evaluates intent against policy at runtime, fail-closed by default.
Included:
- A frozen architectural spec defining the execution boundary and invariants
- A minimal reference runtime that enforces the gate in practice
- Deterministic allow/deny semantics (not advisory guardrails)
- Execution-time logs suitable for audit and compliance scenarios
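The gate semantics listed above (deterministic allow/deny, fail-closed on unknown or failing rules, execution-time audit records) can be sketched roughly like this. All names here are hypothetical illustrations, not the repo's actual API; see /reference-runtime for the real mechanics.

```python
# Illustrative sketch of a fail-closed execution gate.
# Gate, Policy rules, and evaluate() are hypothetical names, not the repo's API.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

@dataclass
class Decision:
    allowed: bool
    reason: str

@dataclass
class Gate:
    # Policy: maps an action name to a deterministic allow/deny rule.
    policy: dict[str, Callable[[dict], bool]]
    audit_log: list[dict] = field(default_factory=list)

    def evaluate(self, action: str, args: dict) -> Decision:
        rule = self.policy.get(action)
        if rule is None:
            # Unknown action: deny (fail closed), never pass through.
            decision = Decision(False, "no policy rule: fail closed")
        else:
            try:
                decision = Decision(bool(rule(args)), "policy rule evaluated")
            except Exception as exc:
                # Any error during evaluation also denies execution.
                decision = Decision(False, f"rule error: {exc}")
        # Execution-time log entry suitable for later audit.
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "args": args,
            "allowed": decision.allowed,
            "reason": decision.reason,
        })
        return decision

gate = Gate(policy={"read_file": lambda a: a.get("path", "").startswith("/tmp/")})
print(gate.evaluate("read_file", {"path": "/tmp/notes.txt"}).allowed)  # True
print(gate.evaluate("delete_db", {}).allowed)                          # False (no rule)
```

The point of the sketch is the shape, not the code: the model never calls an action directly; it only proposes `(action, args)`, and execution happens only when the gate returns an explicit allow.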
There’s plenty of documentation in the repo; if you want the concrete mechanics, the meat is in /reference-runtime.
Deliberately boring.
No dashboards, no agents, no autonomy framework.
Repo + release linked.