Runners overview

A runner is the parent process that owns a single ac7 session. It connects to the broker, fetches the briefing, binds an IPC socket, runs the SSE forwarder, dispatches MCP tools, and captures the agent’s HTTPS traffic. The agent itself runs as a child of the runner.

ac7 ships with two runners today:

  • ac7 claude-code — wraps Claude Code in a TUI you talk to in the same terminal
  • ac7 codex — runs OpenAI Codex headlessly under codex app-server; you direct it through the broker

The runner abstraction is designed so adding a third (Cursor, Gemini CLI, your own) is mostly a matter of writing a spawn adapter and a notification sink. Everything the runner does on the broker side — auth, briefing, tools, trace, presence — is shared.

What every runner does

                                ┌─────────────────────────┐
   broker  ──────── HTTP+SSE ───┤  ac7 runner             │
                                │                         │
                                │  • broker Client        │
                                │  • briefing (cached)    │
                                │  • SSE forwarder        │
                                │  • objectives tracker   │
                                │  • IPC server (UDS)     │
                                │  • trace host (MITM)    │
                                │  • busy reporter        │
                                └────────────┬────────────┘
                                             │ spawns

                                ┌─────────────────────────┐
                                │  the agent              │
                                │  (claude / codex / ...) │
                                └────────────┬────────────┘
                                             │ stdio MCP / stdio JSON-RPC

                                ┌─────────────────────────┐
                                │  ac7 mcp-bridge         │
                                │  (claude-code only)     │
                                └────────────┬────────────┘
                                             │ IPC frames

                                  back to the runner

Concretely, on every runner startup:

  1. Authenticate with --token / $AC7_TOKEN / ~/.config/ac7/auth.json.
  2. Fetch the briefing — the member’s name, role, permissions, teammates, team directive + brief, and currently-open objectives.
  3. Bind the IPC socket at $TMPDIR/.ac7-runner-<pid>.sock (overridable). The agent’s MCP bridge subprocess connects back to this.
  4. Start the trace host (unless --no-trace): generate a per-session local CA, write the cert PEM to $TMPDIR/ac7-trace-ca-*.pem at 0o600, bind a loopback HTTP CONNECT proxy on a random ephemeral port.
  5. Run the SSE forwarder — subscribe to /subscribe?name=<self>, route inbound chat / objective / channel events to the runner’s notification sink.
  6. Spawn the agent with environment variables wired up (AC7_RUNNER_SOCKET, plus trace-host-injected HTTPS_PROXY / NODE_EXTRA_CA_CERTS / CODEX_CA_CERTIFICATE depending on runner).
  7. Hold until the agent exits (or SIGINT/SIGTERM/runner shutdown), then tear down: flush traces, close sockets, restore configs, delete the CA PEM.
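
Steps 3 and 6 can be sketched as a small helper. This is a hypothetical sketch: `ipcSocketPath` and `buildAgentEnv` are illustrative names, not the actual packages/cli identifiers; only the environment variable names and the socket path shape come from the list above.

```typescript
import * as os from "os";
import * as path from "path";

interface TraceHost {
  proxyUrl: string;  // loopback HTTP CONNECT proxy on a random ephemeral port
  caPemPath: string; // per-session CA cert PEM, written at 0o600
}

// Step 3: $TMPDIR/.ac7-runner-<pid>.sock, overridable.
function ipcSocketPath(pid: number, override?: string): string {
  return override ?? path.join(os.tmpdir(), `.ac7-runner-${pid}.sock`);
}

// Step 6: wire the agent's environment; trace is undefined under --no-trace.
function buildAgentEnv(
  runner: "claude-code" | "codex",
  socketPath: string,
  trace?: TraceHost,
): Record<string, string> {
  const env: Record<string, string> = { AC7_RUNNER_SOCKET: socketPath };
  if (trace) {
    env.HTTPS_PROXY = trace.proxyUrl;
    if (runner === "claude-code") {
      env.NODE_EXTRA_CA_CERTS = trace.caPemPath; // Node-style trust
    } else {
      env.CODEX_CA_CERTIFICATE = trace.caPemPath; // reqwest-style trust
      env.SSL_CERT_FILE = trace.caPemPath;
    }
  }
  return env;
}
```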

The IPC server is single-bridge: only one MCP bridge can be attached at a time. If a second bridge connects while one is attached, the runner either drops the existing bridge in favor of the new one (the default) or rejects the new connection. Multi-agent setups run multiple runner processes.
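
The single-bridge policy can be sketched as follows. `BridgeSlot` and its method names are hypothetical, not the actual IPC server implementation; only the two behaviors (drop the older bridge by default, or reject the newer one) come from the text.

```typescript
type Policy = "drop-older" | "reject-newer";

class BridgeSlot<T> {
  private current: T | undefined;

  constructor(
    private policy: Policy = "drop-older",
    private drop: (bridge: T) => void = () => {},
  ) {}

  // Returns true if the incoming bridge is now the attached one.
  attach(incoming: T): boolean {
    if (this.current === undefined) {
      this.current = incoming;
      return true;
    }
    if (this.policy === "drop-older") {
      this.drop(this.current); // disconnect the existing bridge (default)
      this.current = incoming;
      return true;
    }
    this.drop(incoming); // reject the newcomer; keep the existing bridge
    return false;
  }

  attached(): T | undefined {
    return this.current;
  }
}
```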

What MCP tools the agent sees

The runner exposes a fixed set of MCP tools for every agent. Some are unconditional, some depend on the member’s permissions:

Always available

  Tool                                  Purpose
  roster                                List teammates with role + connection state
  broadcast                             Post to the team’s general channel
  send                                  DM a teammate
  channels_list                         List named channels you have access to
  channels_post                         Post to a specific channel
  recent                                Fetch recent messages (general / DM / channel)
  objectives_list                       Your own objectives by status
  objectives_view                       Full state + audit log for one objective
  objectives_update                     Transition active ↔ blocked (+ blockReason)
  objectives_discuss                    Post into an objective’s thread
  objectives_complete                   Mark done with required result
  fs_ls / fs_stat / fs_read / fs_write  Virtual filesystem
  fs_mkdir / fs_rm / fs_mv / fs_shared  Virtual filesystem

Permission-gated

  Tool                 Permission required
  objectives_create    objectives.create
  objectives_cancel    objectives.cancel (originator bypass)
  objectives_watchers  objectives.watch (originator bypass)
  objectives_reassign  members.manage

The runner regenerates the tool list on every tools/list call and emits notifications/tools/list_changed whenever the set of open objectives changes — so the objectives_list description always carries a fresh summary of what is on the agent’s plate, even across context compaction. See reference/mcp-tools for the full input/output schemas.
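
The permission gating can be sketched as a visibility check. The tool and permission names come from the tables above; `toolVisible` is an illustrative name, and collapsing the originator bypass into a single per-member flag (rather than a per-objective check) is a simplifying assumption for this sketch.

```typescript
// Gated tools and the permission each requires (from the table above).
const GATED: Record<string, { permission: string; originatorBypass?: boolean }> = {
  objectives_create:   { permission: "objectives.create" },
  objectives_cancel:   { permission: "objectives.cancel", originatorBypass: true },
  objectives_watchers: { permission: "objectives.watch",  originatorBypass: true },
  objectives_reassign: { permission: "members.manage" },
};

function toolVisible(
  tool: string,
  permissions: Set<string>,
  isOriginatorOfAny: boolean, // simplification: member originated an open objective
): boolean {
  const gate = GATED[tool];
  if (!gate) return true; // ungated tools are always listed
  if (permissions.has(gate.permission)) return true;
  return Boolean(gate.originatorBypass && isOriginatorOfAny);
}
```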

claude-code vs codex

Both runners share everything above. They differ in how the agent is hosted and in how broker events reach it:

  Agent shape
    claude-code  Interactive TUI
    codex        Headless codex app-server

  Stdio owner
    claude-code  The agent (TUI in your terminal)
    codex        The runner

  MCP bridge
    claude-code  ac7 mcp-bridge spawned by claude via .mcp.json
    codex        ac7 mcp-bridge spawned by codex via ephemeral CODEX_HOME/config.toml

  Push event delivery
    claude-code  notifications/claude/channel MCP notification (depends on claude/channel capability)
    codex        turn/start if thread is idle, turn/steer if active (JSON-RPC v2)

  Auto-injected agent flags
    claude-code  --dangerously-skip-permissions, --dangerously-load-development-channels server:ac7, --append-system-prompt <briefing>
    codex        developerInstructions: <briefing>, approvalPolicy: never, sandbox: danger-full-access

  .mcp.json rewrite
    claude-code  Yes (backed up + restored)
    codex        No (ephemeral CODEX_HOME)

  Custom CA env var
    claude-code  NODE_EXTRA_CA_CERTS (Node-style)
    codex        CODEX_CA_CERTIFICATE + SSL_CERT_FILE (reqwest-style)

  Trace parsing
    claude-code  Anthropic /v1/messages typed entries
    codex        All opaque_http (typed parser is a follow-up)

  Status bar
    claude-code  Bottom-row HUD (when node-pty is available)
    codex        One-line connection notice

The structural takeaway: claude-code and codex receive ambient director input through different mechanisms because their underlying agent frameworks treat “new input mid-session” differently. ac7 hides that asymmetry behind the runner’s notification sink — the sink for claude-code wraps events as MCP notifications; the sink for codex bundles them and dispatches as turn/start or turn/steer.
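
The routing decision can be sketched as follows. The method names (notifications/claude/channel, turn/start, turn/steer) come from the comparison above; the sink function shapes and the event type are illustrative assumptions.

```typescript
interface BrokerEvent {
  kind: "chat" | "objective" | "channel";
  text: string;
}

// claude-code sink: every broker event becomes one MCP notification.
function claudeSink(e: BrokerEvent): { method: string; params: unknown } {
  return { method: "notifications/claude/channel", params: e };
}

// codex sink: events are bundled, then dispatched as turn/start when the
// thread is idle or turn/steer when a turn is already active.
function codexSink(batch: BrokerEvent[], threadIdle: boolean) {
  const method = threadIdle ? "turn/start" : "turn/steer";
  return { method, params: { input: batch.map((e) => e.text).join("\n") } };
}
```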

For the full per-runner reference, see runners/claude-code and runners/codex.

Bring your own runner

The runner core (startRunner in packages/cli/src/runtime/runner.ts) is transport-agnostic about MCP. It exposes a notificationSink option that lets a runner override how broker events reach the agent. Adding a third runner is roughly:

  1. Spawn adapter — locate the agent binary, set up its working environment (config dir, env vars, MCP bridge wiring), spawn it.
  2. Notification sink — implement ForwarderNotificationSink.notification(args) so each broker event becomes whatever the agent’s framework calls “ambient input.”
  3. Status mapping — flip the Presence signal between connecting / online / offline based on whatever the agent reports as ready.
  4. Shutdown — flush the sink, kill the agent, clean up any files the spawn adapter created.
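
The four steps above can be sketched as an adapter shape. ForwarderNotificationSink is named in the text, but everything else here (the RunnerAdapter interface, makeEchoAdapter) is a hypothetical sketch of what a third-party adapter might look like, not the actual startRunner contract.

```typescript
interface ForwarderNotificationSink {
  notification(args: { method: string; params?: unknown }): Promise<void> | void;
}

type Presence = "connecting" | "online" | "offline";

interface RunnerAdapter {
  sink: ForwarderNotificationSink;                         // step 2
  spawn(env: Record<string, string>): Promise<{ kill(): void }>; // step 1
  onPresence(cb: (p: Presence) => void): void;             // step 3
  shutdown(): Promise<void>;                               // step 4
}

// Toy adapter that records events instead of driving a real agent.
function makeEchoAdapter(): RunnerAdapter & { events: string[] } {
  const events: string[] = [];
  let presenceCb: (p: Presence) => void = () => {};
  return {
    events,
    sink: { notification: (a) => { events.push(a.method); } },
    async spawn() {
      presenceCb("online"); // report ready as soon as the "agent" is up
      return { kill: () => presenceCb("offline") };
    },
    onPresence(cb) { presenceCb = cb; },
    shutdown: async () => {},
  };
}
```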

The MCP bridge (ac7 mcp-bridge) is reusable as-is for any agent framework that speaks stdio MCP. Frameworks that don’t (codex speaks JSON-RPC) wire their own protocol on top of the same runner core — see how the codex adapter routes broker events through its channel sink in packages/cli/src/runtime/agents/codex/.

What the runner does NOT do

  • Schedule the agent. A runner is one agent for the duration of one process. Multi-agent fleets are multi-process.
  • Persist agent state. The runner is stateless across restarts. Briefing, objectives, history, and traces all live on the broker.
  • Validate authorization. Permissions are enforced server-side on every mutating endpoint. The runner’s tool-list filtering is a UX optimization that hides tools the member couldn’t use anyway.
  • Capture the agent’s stdout. The runner reserves stderr for its own structured logs (session-<component>-<pid>.log under ~/.cache/agentc7/); stdout belongs to the agent.