The Dual Principal Pattern: Designing Platforms Where Humans and Agents Share Logs
The future of observability isn’t about watching your agents — it’s about sharing truth with them.
The Premise: Human-Agent Asymmetry is a Design Flaw
Modern platforms are starting to host both human users and semi-autonomous agents (AI copilots, SWE-agents, observability bots). Yet nearly all current systems log about agents, not with them. This asymmetry breaks accountability and erodes user trust. We need a new architectural pattern.
This post defines a new primitive: the dual principal — two peers (human + agent) whose actions are logged, signed, and shared symmetrically for transparency, traceability, and trust.
1. The Dual Principal Pattern Explained
The pattern is composed of three core tenets:
- Symmetric Justification: Both human and agent must produce justifications for their actions ("why I acted").
- Shared Observability Channel: Both principals share a single, unified observability channel (e.g., via OpenTelemetry). Their logs are peered, not hierarchical.
- Mutual Auditability: The actions of either principal can be audited, replayed, and cross-verified against the logs of the other.
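The three tenets above can be sketched in a few dozen lines. This is a minimal, hypothetical illustration, not a real library: the `Principal` and `SharedLog` names, the record schema, and the use of per-principal HMAC keys are all assumptions made for the sake of the example. The point is the shape: both peers write signed justification records into one stream, and either record can be verified independently.

```python
import hashlib
import hmac
import json
import time


class Principal:
    """A peer (human or agent) that signs its own log records."""

    def __init__(self, name: str, secret: bytes):
        self.name = name
        self._secret = secret  # per-principal signing key (assumed, for illustration)

    def record(self, action: str, justification: str) -> dict:
        # Symmetric Justification: every record carries "why I acted".
        body = {
            "principal": self.name,
            "action": action,
            "justification": justification,
            "ts": time.time(),
        }
        payload = json.dumps(body, sort_keys=True).encode()
        body["sig"] = hmac.new(self._secret, payload, hashlib.sha256).hexdigest()
        return body


class SharedLog:
    """Single append-only stream shared symmetrically by both principals."""

    def __init__(self):
        self.entries: list[dict] = []

    def append(self, entry: dict) -> None:
        self.entries.append(entry)

    def verify(self, entry: dict, secret: bytes) -> bool:
        # Mutual Auditability: anyone holding a principal's key can
        # re-derive and check the signature on that principal's records.
        body = {k: v for k, v in entry.items() if k != "sig"}
        payload = json.dumps(body, sort_keys=True).encode()
        expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, entry["sig"])


# Shared Observability Channel: both principals write to the same stream,
# peered rather than hierarchical.
log = SharedLog()
agent = Principal("swe-agent", b"agent-key")
human = Principal("reviewer", b"human-key")
log.append(agent.record("propose_patch", "tests failed on null input"))
log.append(human.record("approve_patch", "fix matches root cause"))
```

In a production system the HMAC keys would be replaced by asymmetric signatures and the list by a durable stream (e.g. an OpenTelemetry log pipeline), but the symmetry of the record format is the design point.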
2. Case Study: Relay's SWE-Agent and RawAppLog Syncs
I'm currently implementing this pattern in Relay using peer Alloy collectors for my software-engineering agents. When an agent proposes a code change and a human approves it, both actions are signed and logged to the same stream. This has proven invaluable for debugging complex co-creation workflows and for establishing a clear chain of accountability while working in the open with SWE-agents.
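The propose/approve workflow above can be audited mechanically once both actions land in one stream: link each approval to the content hash of the proposal it covers, then replay the stream and flag proposals with no matching approval. This is a hypothetical sketch, not Relay's actual schema; the entry keys (`action`, `ref`) and the `audit_chain` helper are assumptions for illustration.

```python
import hashlib
import json


def entry_hash(entry: dict) -> str:
    """Content-address a log entry so approvals can reference it."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()


def audit_chain(stream: list[dict]) -> list[str]:
    """Replay the shared stream; return hashes of unapproved agent proposals."""
    approvals = {e["ref"] for e in stream if e["action"] == "approve_patch"}
    return [
        entry_hash(e)
        for e in stream
        if e["action"] == "propose_patch" and entry_hash(e) not in approvals
    ]


# Agent proposes; human approves by reference to the proposal's hash.
proposal = {
    "principal": "swe-agent",
    "action": "propose_patch",
    "justification": "tests failed on null input",
}
approval = {
    "principal": "reviewer",
    "action": "approve_patch",
    "ref": entry_hash(proposal),
    "justification": "fix matches root cause",
}

assert audit_chain([proposal, approval]) == []          # chain is complete
assert audit_chain([proposal]) == [entry_hash(proposal)]  # dangling proposal
```

Because the audit runs over the same stream both principals write to, neither side can produce a "clean" audit the other cannot reproduce.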
3. Ethical Implications: From Oversight to Co-Agency
This pattern shifts the paradigm from "human-in-the-loop" oversight to genuine "human-agent co-agency." It makes AI auditability a fundamental design primitive of the system, not an afterthought.
By sharing a channel of truth, we build systems where trust is an emergent property of the architecture itself.