Beyond Client-Server
Current AI agent architectures rely on centralized coordination. Agents communicate through cloud APIs, creating bottlenecks and external dependencies. The Agent Communication Protocol (ACP) takes a different approach: agents run as local processes that communicate through shared memory, coordinated by the Trinity runtime.
THE ACP INSIGHT
Agents don't need to talk to a server to talk to each other. They need a shared memory buffer and a common language. ACP provides both—local-first coordination with cryptographic provenance.
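The core idea can be illustrated with a minimal sketch: two agents exchanging a message through a shared memory buffer, with no server in between. This uses Python's standard `multiprocessing.shared_memory` (backed by `/dev/shm` on Linux); the length-prefix framing and message fields are illustrative assumptions, not the ACP wire format.

```python
import json
from multiprocessing import shared_memory

# Illustrative sketch only: framing and field names are assumptions.
BUF_SIZE = 4096
shm = shared_memory.SharedMemory(create=True, size=BUF_SIZE)
try:
    # Agent A writes a length-prefixed message into the shared buffer.
    msg = json.dumps({"intent": "analyze", "payload": "fn main() {}"}).encode()
    shm.buf[:4] = len(msg).to_bytes(4, "little")
    shm.buf[4:4 + len(msg)] = msg

    # Agent B would attach by name from another process; here we
    # re-attach in-process to show the read side.
    peer = shared_memory.SharedMemory(name=shm.name)
    n = int.from_bytes(peer.buf[:4], "little")
    received = json.loads(bytes(peer.buf[4:4 + n]))
    print(received["intent"])  # -> analyze
    peer.close()
finally:
    shm.close()
    shm.unlink()
```

No sockets, no serialization across a network boundary: the reader sees the writer's bytes directly.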
The Protocol Stack
ACP operates in layers, each handling different aspects of agent coordination:
Agent Layer
Trae (ByteDance), Claude Code, and opencode each implement the ACP client interface
Message Layer
Structured communication format with intent, payload, and provenance hash
Coordination Layer
Trinity router directs messages between agents based on theater availability
Transport Layer
Shared memory buffers via /dev/shm—zero-copy, zero-network, zero-latency
Genesis Layer
Hardware-bound identity ensures agents cannot be cloned or replayed on different machines
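The layered stack above can be sketched as a send path, where each layer wraps the one below it. All function names, the theater table, and the channel string here are hypothetical; the real layering lives inside the Trinity runtime.

```python
# Hypothetical send path through the ACP layers; names are illustrative.

def genesis_sign(msg: dict) -> dict:
    # Genesis layer: bind the sender's hardware identity to the message.
    return {**msg, "genesis": "hw-bound-id"}

def transport_write(msg: dict) -> tuple:
    # Transport layer: place the message in a shared-memory channel.
    return ("shm://acp/inbox", msg)

def coordinate(msg: dict, theaters: dict) -> tuple:
    # Coordination layer: route to the first available theater.
    msg["theater"] = next(t for t, free in theaters.items() if free)
    return transport_write(genesis_sign(msg))

# Agent + Message layers: an agent builds a structured message and sends it.
theaters = {"dGPU": False, "iGPU": True}  # dGPU busy, iGPU free
channel, sent = coordinate({"intent": "analyze", "payload": "x = 1"}, theaters)
print(sent["theater"])  # -> iGPU
```

The point of the layering is that routing decisions (which theater) stay separate from identity (Genesis) and delivery (shared memory).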
Integrated Agents
ACP is not theoretical—it is implemented. Multiple agent systems already communicate through the protocol:
Trae Agent
ByteDance development environment agent. Baked directly into Trinity runtime for code generation and analysis.
Claude Code Agent
Anthropic's Claude optimized for local execution. Runs on dGPU with provenance tracking on every suggestion.
opencode CLI Agent
Command-line interface agent for headless operation. Same capabilities, terminal access.
Custom Agents
User-defined agents through ACP SDK. Build your own and integrate into the Trinity ecosystem.
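A custom agent plugged in through an SDK might look like the sketch below. The `AcpAgent` base class and `handle()` hook are assumptions about what such an interface could look like, not the actual ACP SDK.

```python
from abc import ABC, abstractmethod

# Hypothetical client interface; the real ACP SDK may differ.
class AcpAgent(ABC):
    def __init__(self, name: str, theater: str = "CPU"):
        self.name = name
        self.theater = theater  # preferred theater routing hint

    @abstractmethod
    def handle(self, intent: str, payload: str) -> str:
        """Process one ACP message and return a reply payload."""

class EchoAgent(AcpAgent):
    # Minimal user-defined agent: replies with the intent and payload.
    def handle(self, intent, payload):
        return f"{intent}:{payload}"

agent = EchoAgent("echo", theater="CPU")
print(agent.handle("ping", "hello"))  # -> ping:hello
```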
Message Structure
Every ACP message carries provenance:
- Intent — What the agent wants to accomplish
- Payload — Data being transmitted (code, analysis, question)
- Genesis Hash — Hardware-bound identity of sender
- Timestamp — Nanosecond-precision ordering
- Theater Preference — iGPU/dGPU/CPU routing hint
- Response Channel — Shared memory location for reply
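The six fields above can be sketched as a message type with a provenance digest over its contents. Field names, the SHA-256 construction, and the placeholder values are assumptions; the actual wire format is part of the protected core.

```python
import hashlib
import json
import time
from dataclasses import dataclass, field, asdict

# Hypothetical message structure mirroring the fields listed above.
@dataclass
class ACPMessage:
    intent: str
    payload: str
    genesis_hash: str                          # hardware-bound sender identity
    timestamp_ns: int = field(default_factory=time.monotonic_ns)
    theater: str = "dGPU"                      # iGPU/dGPU/CPU routing hint
    response_channel: str = "/acp/reply/0"     # shared-memory reply slot

    def provenance(self) -> str:
        # Digest over the whole message, so tampering in transit is detectable.
        blob = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()

msg = ACPMessage(
    intent="analyze",
    payload="fn main() {}",
    genesis_hash="demo-genesis-hash",  # placeholder, not a real identity
)
```

A receiver can recompute the digest and compare it against the one attached in transit before acting on the message.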
Decentralized by Design
ACP has no central coordinator. Agents discover each other through shared memory registration. Messages route through the Trinity theater system based on real-time availability. If one agent fails, others continue. If one theater overheats, work migrates. The system is resilient because it has no single point of failure.
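The discovery-and-failover behavior can be sketched with a small registry: agents register themselves, routing prefers the requested theater, and the loss of one agent leaves the others reachable. The `Registry` API is illustrative; in ACP this table would live in shared memory.

```python
# Hypothetical discovery table; the real mechanism is in the protected core.
class Registry:
    def __init__(self):
        self.agents = {}  # agent name -> theater it runs on

    def register(self, name: str, theater: str):
        self.agents[name] = theater

    def deregister(self, name: str):
        self.agents.pop(name, None)

    def route(self, preferred_theater: str):
        # Prefer an agent on the requested theater, else any survivor.
        for name, theater in self.agents.items():
            if theater == preferred_theater:
                return name
        return next(iter(self.agents), None)

reg = Registry()
reg.register("trae", "dGPU")
reg.register("opencode", "CPU")
reg.deregister("trae")        # one agent fails; the others continue
print(reg.route("dGPU"))      # -> opencode (work migrates off dGPU)
```

Because routing consults live registrations rather than a fixed coordinator, there is no single process whose failure stops the system.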
The exact message format, the discovery protocol, and the failure-recovery mechanisms remain within the protected core. We present the architecture; the implementation is the secret sauce.