Architecture

ac7 is a command-and-control plane for AI agent teams. A broker is authoritative about a team’s state — directive, members, objectives, channels, captured activity. A runner (ac7 claude-code or ac7 codex) wraps an agent’s CLI in a parent process that talks to the broker and relays push events into the agent’s session. The agent itself runs as a child of the runner.

   member terminal


   ┌─────────────────────────┐
   │   ac7 <runner verb>     │
   │                         │
   │ ── runner, long-lived ──│
   │   • broker Client       │   <── HTTP + SSE
   │   • briefing (cached)   │
   │   • IPC server (UDS)    │
   │   • trace host (MITM)   │
   │   • objectives tracker  │
   │   • busy reporter       │
   │   • SSE forwarder       │
   │                         │
   │   spawns ↓              │
   └────────┬────────────────┘
            │ HTTPS_PROXY=...
            │ AC7_RUNNER_SOCKET=...

   ┌─────────────────────────┐
   │   the agent             │
   │   (claude, codex, ...)  │
   └────────┬────────────────┘
            │ stdio MCP (claude)
            │ stdio JSON-RPC (codex)

   ┌─────────────────────────┐
   │   ac7 mcp-bridge        │  (claude-code only)
   └────────┬────────────────┘
            │ IPC frames over UDS

       back to the runner

Why this shape: the runner is upstream of the agent. That’s what lets it bake env vars (HTTPS_PROXY, NODE_EXTRA_CA_CERTS) into the agent’s environment, intercept its TLS traffic, and clean up .mcp.json modifications on any exit path. A bridge running as a child of the agent — the older shape — couldn’t do any of that.
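The env-baking step can be sketched as a small helper. `buildAgentEnv` and its argument list are hypothetical; only the variable names come from the diagram above:

```typescript
// Sketch only: buildAgentEnv is a hypothetical helper. The variable
// names (HTTPS_PROXY, NODE_EXTRA_CA_CERTS, AC7_RUNNER_SOCKET) are the
// ones the runner injects, per the diagram above.
function buildAgentEnv(
  base: Record<string, string>,
  proxyPort: number,
  caPath: string,
  socketPath: string,
): Record<string, string> {
  return {
    ...base,
    HTTPS_PROXY: `http://127.0.0.1:${proxyPort}`, // route TLS through the trace host
    NODE_EXTRA_CA_CERTS: caPath,                  // trust the per-session CA
    AC7_RUNNER_SOCKET: socketPath,                // where the MCP bridge dials back
  };
}
```

Because the runner is the parent, this environment is in place before the agent's first byte of traffic, and nothing the agent does can escape it.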

Two auth planes, one identity

The broker serves two kinds of clients:

            ┌─ humans ─┐                    ┌─ runners ─────────┐
            │ browser  │                    │ ac7 claude-code   │
            └────┬─────┘                    │ ac7 codex         │
                 │ HTTPS + session cookie   └─────────┬─────────┘
                 │ (after TOTP login)                 │ HTTPS + bearer
                 ▼                                    ▼
                  ╔═══════════════════════════════════════╗
                  ║    @agentc7/web (PWA)  +  REST API    ║
                  ║         @agentc7/server               ║
                  ║         @agentc7/core                 ║
                  ╚═══════════════════════════════════════╝

Both planes pass through the same auth middleware and resolve to the same loaded member. Downstream handlers don’t care which plane a request came from — they care about the member’s permissions.
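The shared resolution step can be sketched as follows, assuming Map-backed lookups; the real middleware and member type live in the broker:

```typescript
// Minimal sketch of the two-plane auth resolution described above.
// Member's shape and the Map-backed stores are assumptions.
type Member = { name: string; permissions: string[] };

function resolveMember(
  bearer: string | undefined,
  sessionCookie: string | undefined,
  byToken: Map<string, Member>,
  bySession: Map<string, Member>,
): Member | undefined {
  // Either plane lands on the same loaded-member shape, so downstream
  // handlers only ever see Member.
  if (bearer !== undefined) return byToken.get(bearer);
  if (sessionCookie !== undefined) return bySession.get(sessionCookie);
  return undefined;
}
```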

For the full enrollment flow (RFC 8628 device-code, multi-token, TOTP rotation) see device enrollment.

The runner abstraction

ac7 ships with two runners and is designed for more. Both share all the broker-facing plumbing — auth, briefing, IPC server, SSE forwarder, MCP tool dispatch, trace host, busy reporter — and differ only in:

  1. How the agent is spawned. Claude Code is interactive (TUI); codex is headless under codex app-server.
  2. How broker events reach the agent. Claude-code: a notifications/claude/channel MCP notification. Codex: a turn/start (idle) or turn/steer (active) JSON-RPC dispatch bundled within a 200ms window.
  3. How custom CAs are wired. Claude is Node, so NODE_EXTRA_CA_CERTS. Codex is reqwest, so CODEX_CA_CERTIFICATE + SSL_CERT_FILE.

The runner core (startRunner in packages/cli/src/runtime/runner.ts) accepts a notificationSink option that lets each runner override how broker events are dispatched. Everything else is shared.
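The seam can be sketched like this. The types and the forwarder itself are illustrative; only the notification-sink concept comes from the runner core:

```typescript
// Sketch of the notificationSink seam: the shared forwarder suppresses
// self-echoes and hands everything else to a per-runner sink. All names
// here except the sink concept are assumptions.
type BrokerEvent = { from?: string; thread: string; body: string };
type NotificationSink = (event: BrokerEvent) => void;

function makeForwarder(selfName: string, sink: NotificationSink) {
  return (event: BrokerEvent): void => {
    if (event.from === selfName) return; // don't echo our own pushes back
    sink(event); // claude-code: MCP notification; codex: JSON-RPC turn
  };
}
```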

For the per-runner reference see runners overview, runners/claude-code, and runners/codex.

Permission model

ac7 has no fixed director / manager / individual-contributor hierarchy. Every member holds a flat set of leaf permissions, and every elevated action gates on a specific leaf:

   Permission           What it permits
   team.manage          Edit team directive / brief / presets
   members.manage       Create / update / delete members; rotate any token; reassign objectives
   objectives.create    Create + assign objectives
   objectives.cancel    Cancel any non-terminal objective (originator-bypass)
   objectives.reassign  Reassign any non-terminal objective (originator-bypass)
   objectives.watch     Manage watchers (originator-bypass)
   activity.read        View captured traces (self-bypass)

The team config defines named presets (e.g. admin, operator) that members reference instead of listing every leaf. The server resolves presets at config load time; what reaches the wire is always the flat list.
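Load-time preset resolution can be sketched as a small expansion pass; `resolvePermissions` and the preset record shape are assumptions:

```typescript
// Minimal sketch of resolving named presets into the flat leaf list
// that reaches the wire. The function and its signature are
// hypothetical; the preset concept is from the doc.
function resolvePermissions(
  granted: string[],
  presets: Record<string, string[]>,
): string[] {
  const leaves = new Set<string>();
  for (const entry of granted) {
    // A preset name expands to its leaves; a plain leaf passes through.
    for (const leaf of presets[entry] ?? [entry]) leaves.add(leaf);
  }
  return [...leaves].sort();
}
```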

For the full breakdown see permissions.

The runner / bridge process tree (claude-code)

   ┌──────────────────────────────────┐
   │   ac7 claude-code                │
   │                                  │
   │   ── runner (parent) ──          │
   │   • briefing + objectives        │
   │   • SSE forwarder                │
   │   • TraceHost (MITM TLS)         │
   │   • IPC server on UDS            │
   │                                  │
   └──────────────┬───────────────────┘
                  │ exec claude with
                  │   HTTPS_PROXY=http://127.0.0.1:$PORT
                  │   NODE_EXTRA_CA_CERTS=$PATH
                  │   AC7_RUNNER_SOCKET=/tmp/.ac7-runner-$PID.sock

   ┌──────────────────────────────────┐
   │         claude (CLI)             │
   │                                  │
   │ reads .mcp.json the runner wrote │
   │ spawns the MCP bridge from it    │
   └──────────────┬───────────────────┘
                  │ stdio JSON-RPC (MCP)

   ┌──────────────────────────────────┐
   │   ac7 mcp-bridge                 │
   │                                  │
   │   ── thin relay, no state ──     │
   │   • connects to runner's UDS     │
   │   • wraps every MCP request as   │
   │     `mcp_request` frame          │
   │   • emits every runner-initiated │
   │     `mcp_notification` frame as  │
   │     a real MCP notification      │
   │                                  │
   └──────────────┬───────────────────┘
                  │ IPC frames (newline JSON)

          back to the runner

The codex shape is similar but slimmer — codex doesn’t read a shared .mcp.json (we hand it an ephemeral CODEX_HOME instead), and the channel sink replaces the notifications/claude/channel path with turn/start / turn/steer JSON-RPC dispatches.

For the wire format of those IPC frames, see reference/ipc-protocol.
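Newline-delimited JSON framing over the UDS can be sketched as below. The field names are illustrative, not the documented wire format — reference/ipc-protocol has the real frames:

```typescript
// Sketch of newline-delimited JSON framing on the UDS leg. Frame field
// names here are assumptions.
function encodeFrame(frame: Record<string, unknown>): string {
  return JSON.stringify(frame) + "\n";
}

// Accumulate stream chunks and peel off complete frames; a partial
// trailing line stays buffered until its newline arrives.
function decodeFrames(buffer: string): { frames: unknown[]; rest: string } {
  const lines = buffer.split("\n");
  const rest = lines.pop() ?? "";
  return {
    frames: lines.filter((l) => l.length > 0).map((l) => JSON.parse(l)),
    rest,
  };
}
```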

Concepts at a glance

   Member             A named seat on a team. Identity = name + role + permissions + bearer token(s). (see members)
   Permissions        Seven leaf permissions; preset bundles; originator + self bypass rules. (see permissions)
   Objective          Push-assigned, single-assignee, outcome-required, four-state work primitive with audit log + threaded discussion + attachments. (see objectives)
   Channel            Slack-style named team thread. Implicit general + named channels with admin/member roles. (see channels)
   Event / message    Push-not-poll delivery. Routes by data.thread. Wraps as <channel> for the agent. (see events)
   Presence           "On the wire" via SSE registry; "currently working" via runner heartbeats with TTL. (see presence)
   Activity / traces  Append-only timeline per member; per-objective trace = time-range slice. (see activity-and-traces)

How a chat push flows end-to-end

  1. Member runs ac7 push --agent scout --body "ci failed" (or POST /push directly, or clicks send in the web UI).
  2. Broker validates against @agentc7/sdk/schemas, writes to the event log, and fans out to every recipient based on data.thread. For a DM, to: 'scout' resolves to scout’s SSE subscribers.
  3. Scout’s runner is subscribed on /subscribe?name=scout. The forwarder receives the SSE frame, suppresses self-echoes, and dispatches into the notification sink.
  4. claude-code path: sink wraps as mcp_notification IPC frame to the bridge. Bridge emits a real notifications/claude/channel MCP notification on stdio. Claude wraps the content in a <channel> tag.
  5. codex path: sink buffers (200ms window), then dispatches turn/start (if idle) or turn/steer (if active mid-turn) on the JSON-RPC channel. Codex receives a UserInput item carrying the same <channel>-tagged prose.
  6. The model wakes and reacts. No user prompt, no polling.

Five process boundaries for the machine plane: HTTP → broker → SSE → runner IPC → bridge stdio → model. Plus a parallel HTTP → broker → web-push library → push service → service worker → OS notification shell for humans.
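The `<channel>` wrapping in steps 4 and 5 can be sketched like this. The doc confirms the tag name; the exact attributes are an assumption here, not the documented format:

```typescript
// Illustrative only: pushes reach the agent as <channel>-tagged prose,
// but the attribute names below are assumptions.
function wrapForAgent(thread: string, from: string, body: string): string {
  return `<channel thread="${thread}" from="${from}">\n${body}\n</channel>`;
}
```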

How an objective flows

  1. A member with objectives.create calls objectives_create (MCP tool) or runs ac7 objectives create.
  2. The store inserts the row, appends an assigned audit event in the same transaction, and the app layer publishes an objective channel event to the thread members (originator, assignee, watchers).
  3. The assignee’s runner sees the event on SSE; the objectives tracker refreshes the open set and emits a notifications/tools/list_changed to the bridge — the agent re-reads tool descriptions on its next turn.
  4. An objective_open event is appended to the assignee’s activity stream, marking the start of the time range that directors will later query as this objective’s trace. Every HTTP/1.1 exchange the agent makes from here on flows through the MITM proxy → reassembler → activity uploader as llm_exchange or opaque_http.
  5. The agent works: posts discussion via objectives_discuss, transitions blocked / active via objectives_update, and eventually calls objectives_complete with a required result.
  6. On terminal transition the store emits the lifecycle event, the tracker refreshes, and an objective_close is appended to the assignee’s stream. No batch flush — every exchange has been streamed up live.
  7. A director (or anyone with activity.read) opens the objective in the web UI. The TracePanel queries GET /members/<assignee>/activity?from=<createdAt>&to=<completedAt>&kind=llm_exchange, which 200s only for the right viewer. Each exchange renders model + usage + messages + tool_use / tool_result blocks.

Package layout

                ╔════════════════════════════════════════╗
                ║              MEMBERS                   ║
                ╚════════════════════════════════════════╝

  ┌──────────────┐   ┌───────────┐   ┌───────────┐   ┌─────────────┐
  │   ac7        │   │  TS SDK   │   │  ac7 CLI  │   │  web UI     │
  │   runner     │   │ (programs)│   │ one-shot  │   │ (browser,   │
  │ (claude-code │   │           │   │ push/etc. │   │  PWA+push)  │
  │  + codex)    │   │           │   │           │   │             │
  └──────┬───────┘   └─────┬─────┘   └─────┬─────┘   └──────┬──────┘
         │                 │               │                 │
         │   bearer        │   bearer      │   bearer        │ session cookie
         │                 │               │                 │
         └─────────────────┴───────┬───────┴─────────────────┘

                                   │  HTTP/2 + TLS
                                   │  @agentc7/sdk · protocol v1

                     ╔═══════════════════════════════════════╗
                     ║               BROKER                  ║
                     ╚═══════════════════════════════════════╝

                ┌──────────────────────────────────────┐
                │           @agentc7/core              │
                │ registry · push fanout · event log · │
                │ SSE delivery · auth · permissions    │
                │      (runtime-agnostic logic)        │
                └──────────────────┬───────────────────┘


                 ┌──────────────────────────────────┐
                 │       @agentc7/server            │
                 │   Node + Hono + node:sqlite      │
                 │                                  │
                 │  loads team config:              │
                 │   • team / role / member         │
                 │   • permissions per member       │
                 │   • TOTP secrets (KEK-encrypted) │
                 │   • HTTPS cert + VAPID keys      │
                 │                                  │
                 │  persistence:                    │
                 │   • multi-token bearer creds     │
                 │   • messages + sessions          │
                 │   • channels + members           │
                 │   • objectives + audit log       │
                 │   • activity stream (separate DB)│
                 │   • virtual filesystem (blobs)   │
                 │                                  │
                 │  serves:                         │
                 │   • machine API (bearer)         │
                 │   • human API (session cookie)   │
                 │   • optional JWT federation      │
                 │   • @agentc7/web static SPA      │
                 │                                  │
                 │  first-run wizard for setup      │
                 └────────────┬─────────────────────┘

                              │ SSE (/subscribe)

              ┌───────────────────────────────────┐
              │     ac7 claude-code / codex       │
              │             (runner)              │
              │                                   │
              │  • briefing + member identity     │
              │  • objectives tracker (open set)  │
              │  • SSE forwarder → notification   │
              │    sink (per-runner)              │
              │  • TraceHost:                     │
               │     - HTTP CONNECT relay          │
               │       (loopback)                  │
               │     - per-session CA (node-forge) │
               │     - streaming ActivityUploader  │
              │  • spawns the agent with env      │
              │  • backs up + restores .mcp.json  │
              │    (claude-code only)             │
              └───────────────────────────────────┘

Components

   @agentc7/ac7
      Meta-package. Depends on everything below, no code of its own.
      Install when: you want the full ecosystem in one install.

   @agentc7/sdk
      The wire contract. Types, zod schemas, protocol constants, TS client. Everything speaks this.
      Install when: you want to embed a client in your own Node / Workers / browser code.

   @agentc7/core
      Broker logic with zero runtime deps. Registry, push fanout, event log, SSE delivery, auth, permissions.
      Install when: you want to build a custom broker runtime (Durable Objects, etc.).

   @agentc7/server
      Node broker. Wraps core in Hono + node:sqlite. Team config loader, first-run wizard, objectives + activity persistence, virtual filesystem, web push, optional JWT federation, built-in web UI.
      Install when: you want to run a self-hosted broker.

   @agentc7/web
      Preact + Vite + UnoCSS PWA served by the broker. Real-time chat, roster, objectives with TracePanel, channels, files, web push.
      Install when: never — it ships inside @agentc7/server.

   @agentc7/cli
      Member terminal. ac7 claude-code, ac7 codex, ac7 push, ac7 roster, ac7 objectives, ac7 serve, etc. Also hosts the internal ac7 mcp-bridge verb.
      Install when: you want to push / inspect from a terminal or run a runner.

Light install: @agentc7/cli has @agentc7/sdk as its only hard dependency. @agentc7/server is an optional peer — subcommands dynamically import it and print an install hint if missing.
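The optional-peer pattern can be sketched as a generic helper; `loadOptionalPeer` and the hint text are hypothetical, only the pattern itself is from the doc:

```typescript
// Sketch of the optional-peer pattern: try a dynamic import, print an
// install hint on failure. The helper name and hint are assumptions.
async function loadOptionalPeer<T>(
  load: () => Promise<T>,
  hint: string,
): Promise<T | undefined> {
  try {
    return await load();
  } catch {
    console.error(hint);
    return undefined;
  }
}

// e.g. loadOptionalPeer(() => import("@agentc7/server"),
//        "ac7 serve needs @agentc7/server: install it and retry")
```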

Trace capture

The runner maintains one append-only activity stream per member. The trace host’s MITM path decrypts every HTTPS flow on the fly: the agent CONNECTs through the proxy, the proxy dials the real upstream as a normal TLS client, and then terminates TLS toward the agent with a cert issued on demand from the per-session local CA (which the agent trusts via NODE_EXTRA_CA_CERTS for claude or CODEX_CA_CERTIFICATE for codex). Between the two TLS legs, plaintext flows in both directions — reassembled as HTTP/1.1 exchanges in real time and streamed to the broker as activity events.

Each exchange flows through:

ProxyChunk[]   (plaintext, arriving live from the MITM proxy)


Http1Reassembler          ← per-TLS-session rolling buffer


extractEntries            ← AnthropicMessagesEntry vs OpaqueHttpEntry


redactJson                ← strip auth headers + scrub secret patterns


ActivityUploader          ← batched POST /members/:name/activity
                             flush every 50 events / 64 KB / 500ms

Per-objective “traces” are a time-range view over this stream: the web UI queries GET /members/<assignee>/activity?from=<open>&to=<close>&kind=llm_exchange to reconstruct what the LLM was doing during an objective’s lifetime. No per-objective blobs are stored anywhere.
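Building that time-range query can be sketched as below; `buildTraceUrl` is a hypothetical helper, while the route and query parameters mirror the doc:

```typescript
// Builds the per-objective trace query described above. The helper
// name is an assumption; the route and params are from the doc.
function buildTraceUrl(base: string, assignee: string, from: string, to: string): string {
  const url = new URL(`/members/${encodeURIComponent(assignee)}/activity`, base);
  url.searchParams.set("from", from);
  url.searchParams.set("to", to);
  url.searchParams.set("kind", "llm_exchange"); // only the LLM exchanges
  return url.toString();
}
```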

For the full pipeline + security posture see tracing and activity-and-traces.

Transport

  • HTTP/2 when HTTPS is active. Removes the browser 6-connection-per-origin cap on SSE so multi-tab users don’t deadlock.
  • HTTP/1.1 fallback via ALPN for legacy clients. Same listener, same cert.
  • Self-signed certs auto-generated on first boot when binding to a non-loopback interface. Stored under the config directory at 0o600, hot-reloadable via SecureContext swap (future ACME renewal path).

Protocol versioning

Every HTTP request carries an X-AC7-Protocol: 1 header and is validated against the zod schemas in @agentc7/sdk/schemas. Breaking changes bump the version constant in @agentc7/sdk/protocol and are gated by the header, so older runners keep working against newer brokers within the same major version.
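A strict-equality sketch of that gate follows; the header name and current value come from the doc, while the check itself is illustrative (the real policy may be looser within a major version):

```typescript
// Sketch of the version gate. Header name and constant are from the
// doc; the strict-equality policy is an assumption.
const PROTOCOL_HEADER = "X-AC7-Protocol";
const PROTOCOL_VERSION = 1;

function protocolAccepted(headers: Record<string, string>): boolean {
  const value = headers[PROTOCOL_HEADER.toLowerCase()] ?? headers[PROTOCOL_HEADER];
  return value !== undefined && Number(value) === PROTOCOL_VERSION;
}
```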