
Using Companion from Claude Code

HUMΛN Team · 12 min · Technical (Developers)

Claude Code is great at writing code. It is less great at understanding your governance, your delegation policies, and your protocol decisions. That's where the Companion comes in. This post walks through the full hosted MCP integration end to end: what happens, why it happens, and how to build proposal → approve → execute loops on top of it.

The stdio gap (and why hosted SSE solves it)

Cursor spawns local processes: it can run `npx @human/mcp` and pipe stdio. Claude Code (and most cloud-hosted assistants) cannot. They expect an HTTP/SSE endpoint with auth they can negotiate.

@human/mcp-worker is a Cloudflare Worker at mcp.haio.run that speaks MCP over Server-Sent Events, with OAuth 2.1 + PKCE for browser-based delegation minting. Your config is four lines:

{
  "mcpServers": {
    "human": {
      "url": "https://mcp.haio.run/sse"
    }
  }
}

That's the whole client side. No client secret, no shared key, no API token in your dotfiles.

OAuth flow walkthrough

  1. Claude Code opens the SSE endpoint with no token. The Worker responds 401 with `WWW-Authenticate: Bearer realm="..."` plus the OAuth metadata URL.
  2. Claude Code generates a PKCE code_verifier, hashes it, and redirects you to https://console.haio.run/mcp/consent?code_challenge=....
  3. You log into the Console (passkey, no password). The consent screen shows the requesting client name, the scopes it asked for, and the org context.
  4. You click Approve. The Console mints a delegation token bound to that scope set, redirects back with an auth_code.
  5. Claude Code exchanges the code (with the original code_verifier) for a session-scoped delegation token.
  6. The SSE stream opens. Every subsequent message is JSON-RPC over SSE, signed with that session.
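
Steps 2 and 5 hinge on PKCE (RFC 7636). A minimal sketch of generating the verifier/challenge pair in Python; the query parameters beyond `code_challenge` are assumptions, not the Console's documented contract:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge."""
    # 32 random bytes -> 43-char base64url verifier (spec allows 43-128 chars)
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # S256: challenge = base64url(sha256(verifier)), padding stripped
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# Illustrative redirect; real clients also send client_id, redirect_uri, state.
consent_url = f"https://console.haio.run/mcp/consent?code_challenge={challenge}&code_challenge_method=S256"
```

The verifier never leaves the client until the code exchange in step 5, which is what makes the browser redirect safe to do without a client secret.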

Revoking is symmetric: Console → Settings → MCP Access → Revoke. The Worker drops the KV record and the API marks the session revoked. The next message fails with 401.

What Companion knows by default

The Companion has four layers of knowledge:

| Layer | Source | Always available? |
| --- | --- | --- |
| 0 — Canon | The HUMΛN Canon corpus, vector-indexed | Yes, every session |
| 1 — Public KB | 58 Public-classified docs | Yes |
| 2 — Org KB | Confluence/Notion/Drive your org installed | Only if your delegation includes the org |
| 3 — Personal KB | Your Notion/Drive | Only if your delegation includes you |
Layers 0 and 1 are the baseline: the Companion always has the Canon to ground answers in, even if the delegation is narrow. This prevents the "Companion hallucinates because it has no docs" failure mode that plagues most LLM-as-platform setups.

Multi-turn sessions

The hosted SSE session is durable for the lifetime of the OAuth grant. The Companion remembers the conversation, the current intent (if you're shaping one), and any pending signals across tool calls. This matters because a Tier 1 `human.ask` can hand back `classification: "fuzzy"` and ask a clarifying question, and you want the next call to know what was asked.

The classification + intent_action pattern

Every human.ask response carries a classification and (when relevant) an intent_action block:

{
  "classification": "intent",
  "text": "I'll set up a workforce schedule that mutes non-critical agent signals on weekends.",
  "intent_action": {
    "tool_id": "workforce.schedule.create",
    "params": { "rule": "mute non-critical", "window": "weekends" },
    "autonomy": "propose",
    "requires_approval": true,
    "reversible": true,
    "human_readable": "Mute non-critical agent signals on weekends"
  }
}

Your Claude Code agent reads that block and:

  1. Shows the human_readable summary in chat.
  2. Asks the user to approve.
  3. On approval, calls human.call({ tool_id, params }).
  4. The HUMΛN side enforces delegation, runs the risk gate, executes, and returns a provenance receipt.

That's the whole loop. The Companion proposes, the human approves, HUMΛN executes — and every step is logged.
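
The four steps above can be sketched as one function, with `ask`, `call`, and `approve` standing in for the `human.ask` / `human.call` MCP tools and whatever approval UI your agent has (all three names are placeholders, not a real client API):

```python
def run_proposal_loop(ask, call, approve, prompt: str):
    """Propose -> approve -> execute, following the intent_action contract."""
    response = ask(prompt)                      # human.ask
    action = response.get("intent_action")
    if action is None:
        return None                             # plain answer, nothing to run
    print(action["human_readable"])             # 1. show the summary in chat
    if action.get("requires_approval", True) and not approve(action["human_readable"]):
        return None                             # 2-3. user declined, stop here
    # 4. human.call: HUMΛN enforces delegation, runs the risk gate, executes,
    #    and returns a provenance receipt.
    return call({"tool_id": action["tool_id"], "params": action["params"]})
```

Note the conservative default: if `requires_approval` is missing, the sketch still asks, since an unnecessary confirmation is cheaper than an unapproved execution.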

Provenance is automatic

Every Worker tool call is logged with:

  • The session ID (so you can group by integration).
  • The tool name and (redacted) params.
  • The delegation chain (who delegated to whom, with what scope).
  • The outcome (success/failure, duration, error class).
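
With those fields in every record, rolling receipts up per integration is a few lines. The record shape below is an assumption inferred from the list above, not a documented schema:

```python
from collections import defaultdict

def summarize_by_session(records: list) -> dict:
    """Group provenance receipts by session ID: call count and failures."""
    summary = defaultdict(lambda: {"calls": 0, "failures": 0})
    for r in records:
        s = summary[r["session_id"]]  # session ID groups by integration
        s["calls"] += 1
        if r["outcome"] != "success":
            s["failures"] += 1
    return dict(summary)
```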

The Console's MCP Access page surfaces this for end users; the Cloud Admin surface shows it to the HUMΛN team. Your auditor doesn't have to ask "did the AI do this?" — the receipt is right there.

Practical: building a Claude Code workflow

A recipe we've shipped internally:

You are a HUMΛN-aware engineering assistant. Before suggesting code:

1. Call `human.companion.kb_search` to look up the relevant Canon doc.
2. Cite the doc id in your answer.
3. If the user asks for an action (deploy, configure, scaffold), call `human.ask` to get a proposal.
4. Show the human-readable summary, ask for approval.
5. On approval, call `human.call` with the intent_action params.

That's it. The agent inherits HUMΛN's governance for free.

What's next

  • Doc 3 — Delegation Scope Design Guide.
  • Doc 5 — Intent Routing Architecture.
  • AI corpus: /ai/articles/hosted-mcp-quickstart.md for the four-line config.