Runners overview
A runner is the parent process that owns a single ac7 session. It connects to the broker, fetches the briefing, binds an IPC socket, runs the SSE forwarder, dispatches MCP tools, and captures the agent’s HTTPS traffic. The agent itself runs as a child of the runner.
ac7 ships with two runners today:
- `ac7 claude-code` — wraps Claude Code in a TUI you talk to in the same terminal
- `ac7 codex` — runs OpenAI Codex headlessly under `codex app-server`; you direct it through the broker
The runner abstraction is designed so adding a third (Cursor, Gemini CLI, your own) is mostly a matter of writing a spawn adapter and a notification sink. Everything the runner does on the broker side — auth, briefing, tools, trace, presence — is shared.
What every runner does
```
                            ┌─────────────────────────┐
broker ──────── HTTP+SSE ───┤       ac7 runner        │
                            │                         │
                            │ • broker client         │
                            │ • briefing (cached)     │
                            │ • SSE forwarder         │
                            │ • objectives tracker    │
                            │ • IPC server (UDS)      │
                            │ • trace host (MITM)     │
                            │ • busy reporter         │
                            └────────────┬────────────┘
                                         │ spawns
                                         ▼
                            ┌─────────────────────────┐
                            │        the agent        │
                            │ (claude / codex / ...)  │
                            └────────────┬────────────┘
                                         │ stdio MCP / stdio JSON-RPC
                                         ▼
                            ┌─────────────────────────┐
                            │     ac7 mcp-bridge      │
                            │   (claude-code only)    │
                            └────────────┬────────────┘
                                         │ IPC frames
                                         ▼
                                back to the runner
```
Concretely, on every runner startup:
- Authenticate with `--token` / `$AC7_TOKEN` / `~/.config/ac7/auth.json`.
- Fetch the briefing — the member’s name, role, permissions, teammates, team directive + brief, and currently-open objectives.
- Bind the IPC socket at `$TMPDIR/.ac7-runner-<pid>.sock` (overridable). The agent’s MCP bridge subprocess connects back to this.
- Start the trace host (unless `--no-trace`): generate a per-session local CA, write the cert PEM to `$TMPDIR/ac7-trace-ca-*.pem` at `0o600`, and bind a loopback HTTP CONNECT proxy on a random ephemeral port.
- Run the SSE forwarder — subscribe to `/subscribe?name=<self>` and route inbound chat / objective / channel events to the runner’s notification sink.
- Spawn the agent with environment variables wired up (`AC7_RUNNER_SOCKET`, plus the trace-host-injected `HTTPS_PROXY` / `NODE_EXTRA_CA_CERTS` / `CODEX_CA_CERTIFICATE`, depending on the runner); see the sketch after this list.
- Hold until the agent exits (or SIGINT/SIGTERM/runner shutdown), then tear down: flush traces, close sockets, restore configs, delete the CA PEM.
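For concreteness, a minimal TypeScript sketch of the spawn step. The `TraceSession` shape and `spawnAgent` helper are invented for illustration; only the environment variable names come from the list above:

```ts
import { spawn } from "node:child_process";

// Hypothetical session shape; fields mirror the startup steps above.
interface TraceSession {
  socketPath: string; // UDS from the "bind the IPC socket" step
  proxyUrl?: string;  // loopback CONNECT proxy from the trace host
  caPemPath?: string; // per-session CA PEM ($TMPDIR/ac7-trace-ca-*.pem)
}

function spawnAgent(bin: string, args: string[], s: TraceSession) {
  const env: NodeJS.ProcessEnv = {
    ...process.env,
    AC7_RUNNER_SOCKET: s.socketPath,
  };
  if (s.proxyUrl && s.caPemPath) {
    // Absent under --no-trace.
    env.HTTPS_PROXY = s.proxyUrl;
    env.NODE_EXTRA_CA_CERTS = s.caPemPath; // Node agents; codex uses CODEX_CA_CERTIFICATE
  }
  return spawn(bin, args, { env, stdio: "inherit" });
}
```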
The IPC server is single-bridge: if a second bridge connects while one is already attached, the older one is dropped (default) or the newer one is rejected. Multi-agent setups run multiple runner processes.
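A sketch of that single-bridge policy, assuming a Node `net` server on the UDS (the real implementation is not shown here):

```ts
import net from "node:net";

let attached: net.Socket | null = null;
const rejectNewer = false; // default policy drops the older bridge instead

const server = net.createServer((sock) => {
  if (attached) {
    if (rejectNewer) {
      sock.destroy(); // non-default: refuse the newcomer
      return;
    }
    attached.destroy(); // default: evict the previously attached bridge
  }
  attached = sock;
  sock.on("close", () => {
    if (attached === sock) attached = null;
  });
});
server.listen(process.env.AC7_RUNNER_SOCKET ?? "/tmp/.ac7-runner-demo.sock");
```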
What MCP tools the agent sees
The runner exposes a fixed set of MCP tools for every agent. Some are unconditional, some depend on the member’s permissions:
Always available
| Tool | Purpose |
|---|---|
| `roster` | List teammates with role + connection state |
| `broadcast` | Post to the team’s general channel |
| `send` | DM a teammate |
| `channels_list` | List named channels you have access to |
| `channels_post` | Post to a specific channel |
| `recent` | Fetch recent messages (general / DM / channel) |
| `objectives_list` | Your own objectives by status |
| `objectives_view` | Full state + audit log for one objective |
| `objectives_update` | Transition active ↔ blocked (+ `blockReason`) |
| `objectives_discuss` | Post into an objective’s thread |
| `objectives_complete` | Mark done with required result |
| `fs_ls` / `fs_stat` / `fs_read` / `fs_write` | Virtual filesystem |
| `fs_mkdir` / `fs_rm` / `fs_mv` / `fs_shared` | Virtual filesystem |
Permission-gated
| Tool | Permission required |
|---|---|
| `objectives_create` | `objectives.create` |
| `objectives_cancel` | `objectives.cancel` (originator bypass) |
| `objectives_watchers` | `objectives.watch` (originator bypass) |
| `objectives_reassign` | `members.manage` |
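A sketch of what that gating looks like at tool-list time, with the map taken from the table above; the helper names are assumed, and the originator-bypass cases are elided:

```ts
// Gated tool -> required permission, per the table above.
const GATED: Record<string, string> = {
  objectives_create: "objectives.create",
  objectives_cancel: "objectives.cancel",   // originator bypass elided
  objectives_watchers: "objectives.watch",  // originator bypass elided
  objectives_reassign: "members.manage",
};

// Keep a tool if it is ungated or the member holds its permission.
function visibleTools(allTools: string[], perms: ReadonlySet<string>): string[] {
  return allTools.filter((t) => !(t in GATED) || perms.has(GATED[t]));
}
```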
The runner regenerates the tool list on every `tools/list` call and emits `notifications/tools/list_changed` whenever the open objective set changes — so `objectives_list`’s description carries a fresh summary of the agent’s plate across context compaction.
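A sketch of that refresh cycle, assuming a `sendNotification` stand-in for however the runner’s MCP server pushes JSON-RPC notifications; the description text is illustrative:

```ts
type Objective = { id: string; title: string };

// Stand-in for the runner's MCP notification channel.
declare function sendNotification(msg: { method: string }): void;

const toolDescriptions: Record<string, string> = {};

function onOpenObjectivesChanged(open: Objective[]) {
  // Rebuild the description the next tools/list call will return...
  toolDescriptions["objectives_list"] =
    "List your objectives by status. Open now: " +
    (open.map((o) => o.title).join(", ") || "none");
  // ...and tell the client to refetch the tool list.
  sendNotification({ method: "notifications/tools/list_changed" });
}
```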
See reference/mcp-tools for the full input/output schemas.
claude-code vs codex
Both runners share everything above. They differ in how the agent is hosted and in how broker events reach it:
| | `ac7 claude-code` | `ac7 codex` |
|---|---|---|
| Agent shape | Interactive TUI | Headless `codex app-server` |
| Stdio owner | The agent (TUI in your terminal) | The runner |
| MCP bridge | `ac7 mcp-bridge` spawned by claude via `.mcp.json` | `ac7 mcp-bridge` spawned by codex via ephemeral `CODEX_HOME/config.toml` |
| Push event delivery | `notifications/claude/channel` MCP notification (depends on `claude/channel` capability) | `turn/start` if thread is idle, `turn/steer` if active (JSON-RPC v2) |
| Auto-injected agent flags | `--dangerously-skip-permissions`, `--dangerously-load-development-channels server:ac7`, `--append-system-prompt <briefing>` | `developerInstructions: <briefing>`, `approvalPolicy: never`, `sandbox: danger-full-access` |
| `.mcp.json` rewrite | Yes (backed up + restored) | No (ephemeral `CODEX_HOME`) |
| Custom CA env var | `NODE_EXTRA_CA_CERTS` (Node-style) | `CODEX_CA_CERTIFICATE` + `SSL_CERT_FILE` (reqwest-style) |
| Trace parsing | Anthropic `/v1/messages` typed entries | All `opaque_http` (typed parser is a follow-up) |
| Status bar | Bottom-row HUD (when `node-pty` is available) | One-line connection notice |
The structural takeaway: claude-code and codex receive ambient director input through different mechanisms because their underlying agent frameworks treat “new input mid-session” differently. ac7 hides that asymmetry behind the runner’s notification sink — the sink for claude-code wraps events as MCP notifications; the sink for codex bundles them and dispatches them as `turn/start` or `turn/steer`.
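A sketch of the two sinks behind one interface. Only the `ForwarderNotificationSink` name and the method strings come from this page; the interface shape and the `bridge` / `codex` transports are stand-ins:

```ts
interface ForwarderNotificationSink {
  notification(args: { method: string; params?: unknown }): Promise<void>;
}

// Stand-ins for the two transports (hypothetical shapes).
declare const bridge: { send(msg: object): Promise<void> }; // stdio MCP bridge
declare const codex: {
  isThreadIdle(): Promise<boolean>;
  request(method: string, params: object): Promise<void>;   // JSON-RPC v2
};

// claude-code: wrap the broker event as an MCP notification.
const claudeSink: ForwarderNotificationSink = {
  async notification(args) {
    await bridge.send({ method: "notifications/claude/channel", params: args.params });
  },
};

// codex: start a new turn if the thread is idle, steer the active one otherwise.
const codexSink: ForwarderNotificationSink = {
  async notification(args) {
    const method = (await codex.isThreadIdle()) ? "turn/start" : "turn/steer";
    await codex.request(method, { input: args });
  },
};
```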
For the full per-runner reference, see runners/claude-code and runners/codex.
Bring your own runner
The runner core (`startRunner` in `packages/cli/src/runtime/runner.ts`) is transport-agnostic about MCP. It exposes a `notificationSink` option that lets a runner override how broker events reach the agent. Adding a third runner is roughly the following (a wiring sketch follows the list):
- Spawn adapter — locate the agent binary, set up its working environment (config dir, env vars, MCP bridge wiring), and spawn it.
- Notification sink — implement `ForwarderNotificationSink.notification(args)` so each broker event becomes whatever the agent’s framework calls “ambient input.”
- Status mapping — flip the `Presence` signal between `connecting` / `online` / `offline` based on whatever the agent reports as ready.
- Shutdown — flush the sink, kill the agent, clean up any files the spawn adapter created.
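Under those assumptions, the wiring might look roughly like this; apart from `startRunner` and `notificationSink`, every option name and helper below is a guess at the shape, not the actual API:

```ts
import { startRunner } from "./runtime/runner"; // packages/cli/src/runtime/runner.ts

// Stand-ins for your runner's pieces (all hypothetical).
declare function spawnMyAgent(session: unknown): Promise<void>;
declare const myAgentSink: {
  notification(args: { method: string; params?: unknown }): Promise<void>;
};
declare const presence: { set(s: "connecting" | "online" | "offline"): void };

await startRunner({
  spawn: spawnMyAgent,                        // 1. spawn adapter
  notificationSink: myAgentSink,              // 2. ambient input
  onAgentReady: () => presence.set("online"), // 3. status mapping (assumed hook)
  onShutdown: async () => {
    // 4. flush the sink, kill the agent, remove temp files (assumed hook)
  },
});
```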
The MCP bridge (`ac7 mcp-bridge`) is reusable as-is for any agent framework that speaks stdio MCP. Frameworks that don’t (codex speaks JSON-RPC) wire their own protocol on top of the same runner core — see how the codex adapter routes broker events through its channel sink in `packages/cli/src/runtime/agents/codex/`.
What the runner does NOT do
- Schedule the agent. A runner is one agent for the duration of one process. Multi-agent fleets are multi-process.
- Persist agent state. The runner is stateless across restarts. Briefing, objectives, history, and traces all live on the broker.
- Validate authorization. Permissions are enforced server-side on every mutating endpoint. The runner’s tool-list filtering is a UX optimization that hides tools the member couldn’t use anyway.
- Catch the agent’s stdout. The runner reserves stderr for its own structured logs (`session-<component>-<pid>.log` under `~/.cache/agentc7/`); stdout belongs to the agent.