I research runtime architectures for AI-enabled systems.
My focus is execution after inference: how intelligent behavior is authorized, executed, observed, and governed in long-running systems.
ICE is the environment where this discipline is formalized through explicit constraints:
- Authority precedes execution (inference cannot authorize actions)
- State transitions are explicit and inspectable
- Traceability and evidence are structural invariants
- Determinism and reproducibility are mandatory properties
- External effects are boundary-governed
- Transitions are cost-accountable at an abstract level
What does it mean to run intelligent systems reliably over time?
This profile documents the work as it evolves.