
Getting Started with HUMΛN MCP

HUMΛN Team · 8 min · Technical (Developers)

The Model Context Protocol turned LLMs into something that can use tools. HUMΛN MCP turns an LLM into something that can use a governed colleague — the Companion — and through it, every governed primitive HUMΛN exposes: KB search, intent resolution, capability discovery, and human.call for actually doing things on a human's behalf.

This post is a 2-minute setup followed by your first three calls.

What you get

@human/mcp is one npm package that exposes three layers to any MCP-compatible client (Cursor, Claude Code, Continue, Zed, Cline, Goose):

| Tier | Tool | When to use |
| --- | --- | --- |
| 1 | `human.ask` | "I want X" — let the Companion classify, intent-shape, and answer or propose action. |
| 1 | `human.companion.kb_search` | Direct semantic search over Canon + your delegated KB. |
| 2 | `human.intent` | Compile a brief into a capability resolution plan. |
| 2 | `human.capability.discover` | Find capabilities that match a description. |
| 3 | `human.call` | Execute a capability with delegation, risk gate, and provenance. |
| 3 | 12+ raw REST proxies | Direct API access for everything else. |

The headline: start at Tier 1, drop down when you need control. human.ask is enough for 80% of integration work. The lower tiers exist so you never hit a ceiling.
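That decision rule can be sketched as a tiny helper. The tool names come from the table above; the `pickTool` function itself is just an illustration, not part of the package:

```typescript
// A rough decision rule for the three tiers. Tool names are from the
// tier table; this helper is illustrative, not part of @human/mcp.
type Tier = 1 | 2 | 3;

function pickTool(need: "answer" | "plan" | "execute"): { tier: Tier; tool: string } {
  switch (need) {
    case "answer":
      return { tier: 1, tool: "human.ask" }; // default: let the Companion drive
    case "plan":
      return { tier: 2, tool: "human.intent" }; // you want the resolution plan
    case "execute":
      return { tier: 3, tool: "human.call" }; // you want explicit, gated execution
  }
}
```

If you're unsure, start with `"answer"`: `human.ask` will escalate into an intent proposal on its own when your text implies action.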

Setup — Cursor (local stdio)

Add to `~/.cursor/mcp.json`:

```json
{
  "mcpServers": {
    "human": {
      "command": "npx",
      "args": ["-y", "@human/mcp"],
      "env": {
        "HUMAN_API_URL": "https://api.haio.run",
        "HUMAN_API_KEY": "<your delegation token>"
      }
    }
  }
}
```

Restart Cursor. Open the chat, switch to Agent mode, and you should see the `human` server in the tool picker. Mint a delegation scoped to `kb:read:public companion:chat` for a safe first run.

Setup — Claude Code (hosted SSE)

Claude Code is configured here with the hosted endpoint rather than a local stdio process:

```json
{
  "mcpServers": {
    "human": {
      "url": "https://mcp.haio.run/sse"
    }
  }
}
```

First connect triggers an OAuth/PKCE flow: a browser tab opens at console.haio.run, you confirm scopes, the worker mints a session-scoped delegation, and the SSE stream lights up.
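The PKCE leg of that handshake is standard RFC 7636 material, not anything HUMΛN-specific. A minimal sketch of the S256 verifier/challenge derivation the client performs before opening that browser tab:

```typescript
import { createHash, randomBytes } from "node:crypto";

// RFC 7636 base64url encoding: standard base64 with URL-safe alphabet, no padding.
function base64url(buf: Buffer): string {
  return buf
    .toString("base64")
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
}

// S256 method: challenge = base64url(sha256(verifier)).
export function makePkcePair(): { verifier: string; challenge: string } {
  const verifier = base64url(randomBytes(32)); // 43 chars, within the RFC's 43–128 range
  const challenge = base64url(createHash("sha256").update(verifier).digest());
  return { verifier, challenge };
}
```

The client sends the challenge with the authorization request and the verifier with the token exchange; the server recomputes the hash to bind the two.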

Your first call: human.ask

```js
human.ask({ text: "How do I add a new connector to HUMΛN?" })
```

Response:

```json
{
  "classification": "question",
  "text": "Use `human extension create --template connector`. The CLI scaffolds...",
  "citations": [
    { "uri": "human://kb/106_connector_guide", "title": "Connectors guide" }
  ]
}
```

Now ask something fuzzy:

```js
human.ask({ text: "I want to reduce the noise from my agents on weekends" })
```

You'll get back `"classification": "intent"` with an `intent_action` block — a structured proposal you can execute (with approval) or refine. That's the magic: the same tool answers questions and shapes intents.
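In client code, that means branching on the classification. The field names below mirror the examples in this post, but the exact response schema (in particular the shape of `intent_action`) is an assumption for illustration:

```typescript
// Hypothetical response union — field names follow this post's examples,
// but the precise schema of intent_action is assumed, not documented here.
type AskResponse =
  | { classification: "question"; text: string; citations: { uri: string; title: string }[] }
  | { classification: "intent"; intent_action: { capability: string; params: Record<string, unknown> } };

function handleAsk(res: AskResponse): string {
  switch (res.classification) {
    case "question":
      return res.text; // render the answer (citations alongside)
    case "intent":
      // Surface the structured proposal; execution still needs approval.
      return `Proposed action: ${res.intent_action.capability} (awaiting approval)`;
  }
}
```

The exhaustive `switch` is the point: a new classification value becomes a compile error, not a silently dropped response.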

Your first KB read

```js
human.companion.kb_search({ query: "delegation scope vocabulary", limit: 5 })
```

The Companion uses Canon RAG (the foundational knowledge base, vector-indexed) plus any KB sources your delegation grants. Public scopes see the public corpus; org-tier delegations see internal docs you've been granted.
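If you post-process the hits client-side, a common pattern is a relevance floor plus URI deduplication. The `score` field here is an assumption about the result shape, not a documented part of the response:

```typescript
// Hypothetical kb_search hit shape; the score field is an assumption.
interface KbHit {
  uri: string;
  title: string;
  score: number;
}

// Keep the top-N hits above a relevance floor, deduplicated by URI.
function topHits(hits: KbHit[], limit = 5, floor = 0.5): KbHit[] {
  const seen = new Set<string>();
  return hits
    .filter((h) => h.score >= floor)
    .sort((a, b) => b.score - a.score)
    .filter((h) => (seen.has(h.uri) ? false : (seen.add(h.uri), true)))
    .slice(0, limit);
}
```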

Minting a production delegation

For local dev, use a long-lived dev token. For production, mint a scoped, time-bound delegation:

```bash
human delegation mint \
  --to "did:agent:my-cursor" \
  --scope "kb:read:public companion:chat human_api:agents:invoke" \
  --expires-in 30d
```

Drop the token into your MCP config and you're done. The HUMΛN side enforces scope on every call; if your agent tries something outside the grant, you get a 403 with a remediation hint, not a hang.
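Scopes are just space-separated strings, so a client-side pre-check is trivial. This mirrors the server-side enforcement described above, but treat it as a sanity check in your own agent, not the actual implementation:

```typescript
// Granted scopes arrive as a space-separated string,
// e.g. "kb:read:public companion:chat human_api:agents:invoke".
// Illustrative client-side check only; the server is the authority.
function hasScope(granted: string, required: string): boolean {
  return granted.split(/\s+/).filter(Boolean).includes(required);
}
```

Checking before calling lets your agent fail with a useful message instead of round-tripping for a 403.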

What's next

  • Doc 2 — Using Companion from Claude Code (deep dive on hosted SSE).
  • Doc 3 — Delegation Scope Design Guide (how human.call mode relates to minted scopes).
  • Doc 5 — Intent Routing Architecture (how human.ask works under the hood).

The whole thing is one npm install away.