KB Access Control Explainer

HUMΛN Team · 9 min · Technical (Builders + Operators)

When an LLM platform says "I can answer questions about your docs," the next question is always: whose docs, and at what classification? Most platforms answer that with embarrassed silence. HUMΛN answers it with a classification model that the Companion cannot violate, even if it tries.

This post is the explainer for builders, operators, and anyone with an audit checklist.

Two personas, one mechanism

There are two people who ask about KB access:

  1. A builder building on HUMΛN — wants to know what the Companion can see when their AI client connects via MCP.
  2. A HUMΛN dev building HUMΛN — wants to know which internal docs the Companion can ground answers in for their team.

Both personas share the same enforcement mechanism: every KB doc carries a classification, and every delegation token carries the right to read certain classifications. The Companion's KB layer (@human/companion-kb) joins them at query time. There is no separate "admin override" path.
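
In code, that join might look like the following sketch. The type and field names here are illustrative, not the actual @human/companion-kb schema:

```typescript
type Classification = "Public" | "Internal" | "ExternalPartner" | "FoundersOnly";

// Hypothetical shapes -- the real token and doc records may carry more fields.
interface DelegationToken {
  passportDid: string;
  readableClassifications: Classification[]; // e.g. ["Public"]
}

interface KbDoc {
  id: string;
  classification: Classification;
  body: string;
}

// The join: a doc is visible only if its classification appears on the
// delegation token. Note there is no admin-override branch anywhere.
function visibleDocs(token: DelegationToken, docs: KbDoc[]): KbDoc[] {
  return docs.filter((d) => token.readableClassifications.includes(d.classification));
}
```

The point of the sketch is the shape, not the code: visibility is a set intersection between what the doc declares and what the token grants, evaluated at query time.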

The four classifications

  • Public: readable by anyone with kb:read:public (everyone, by default). Contains the 58 Public docs, the Canon RAG corpus, and product marketing.
  • Internal: readable by HUMΛN team delegations, plus customer admins under explicit grant. Contains architecture, implementation status, design decisions, and KB 100s/200s/300s.
  • ExternalPartner: readable by designated partner DIDs with kb:read:partner. Contains specific partner-shared docs (e.g. integration guides not yet GA).
  • FoundersOnly: readable by founders' passports only. Contains strategic plans, sensitive financials, and board materials.

FoundersOnly is never exposed through MCP. The classification check happens before the embedding is even retrieved from pgvector — there's no path where an MCP client can query a FoundersOnly doc, regardless of how clever the prompt injection.
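
A sketch of that pre-retrieval gate, assuming a pgvector-backed store (the table, column, and function names are hypothetical; `<=>` is pgvector's cosine-distance operator):

```typescript
// Illustrative only: the classification predicate runs in the same SQL
// statement as the similarity search, so docs above the caller's level
// never enter the candidate set -- and FoundersOnly is excluded outright
// for any MCP-originated query, whatever the token says.
function buildKbQuery(
  allowed: string[],
  queryEmbedding: number[],
): { sql: string; params: unknown[] } {
  const levels = allowed.filter((c) => c !== "FoundersOnly"); // hard wall for MCP
  return {
    sql: `SELECT id, body
          FROM kb_docs
          WHERE classification = ANY($1)
          ORDER BY embedding <=> $2::vector
          LIMIT 10`,
    params: [levels, JSON.stringify(queryEmbedding)],
  };
}
```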

What "Public" actually means

When a Cursor user with no org grant says "how do I add a connector?", the Companion has access to:

  • The full Canon corpus (kb/0–99 — foundational principles and protocol architecture).
  • All 58 Public-classified KB docs.
  • Public AI corpus (apps/website/public/ai/articles/).
  • Public OpenAPI spec.
  • Public marketing pages.

That's a lot — enough to answer the vast majority of integration questions without ever needing org context.

The Canon RAG layer is special: it's pre-seeded into every Companion session at the system-prompt level. The Companion always knows what HUMΛN is before it answers a single question. This is what stops the "Companion confabulates because it has no docs" failure mode.
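
A minimal sketch of that pre-seeding, assuming a simple string-concatenation bootstrap (the real session wiring is not specified here):

```typescript
// Hypothetical session bootstrap: the Canon summary is concatenated into
// the system prompt before any user turn, so the Companion is never
// reasoning about HUMΛN from an empty context.
function buildSystemPrompt(basePrompt: string, canonSummary: string): string {
  return `${basePrompt}\n\n--- Canon (pre-seeded) ---\n${canonSummary}`;
}
```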

What "Internal" means (and how it's gated)

When a HUMΛN team member's delegation has kb:read:internal, the Companion can additionally see:

  • Architecture deep-dives (KB 100s — capability graph, intent compiler, autonomy profiles).
  • Implementation status (KB 147 — what's built, what's stubbed).
  • Design decisions and ADRs (KB 200s).
  • Internal runbooks.

A customer admin can request kb:read:internal for a narrow slice (e.g. integration architecture for a specific connector). They cannot request the full corpus. The grant model is least-privilege: explicit, time-bound, scoped.
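
A grant record under that model might look like this hypothetical sketch — explicit classification, an expiry, and a scope restricting the grant to one slice of the corpus:

```typescript
// Illustrative shape for a least-privilege Internal grant.
interface InternalGrant {
  granteeDid: string;
  classification: "Internal";
  scopePrefix: string; // e.g. one connector's integration docs
  expiresAt: Date;
}

// A doc is readable under the grant only if it is inside the scoped
// slice AND the grant has not expired.
function grantPermits(grant: InternalGrant, docPath: string, now: Date): boolean {
  return now < grant.expiresAt && docPath.startsWith(grant.scopePrefix);
}
```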

What "FoundersOnly" means (and why it never moves)

Strategic plans, board materials, sensitive financial models, individual passport-level audit trails for HUMΛN execs — none of this is reachable from any MCP surface. The Companion doesn't even know these docs exist when serving an MCP client. If you ask "show me HUMΛN's Series A plan," it will truthfully say "I don't have access to that."

This is a hard wall, not a soft suggestion.

How org KB plugs in

When an org admin installs a connector (Confluence, Notion, Drive, SharePoint), HUMΛN ingests the content into the org's vector store. From then on, members of that org with kb:read:org see those docs alongside Public Canon. Non-members see only Public.

The same model applies to personal KB — connect your own Notion, and your Companion sessions can ground answers in your notes. Other users in the same org cannot.

This is implemented as a join: the KB query plan is (Public Canon) ∪ (Org KB if delegated) ∪ (Personal KB if delegated), with a final classification filter on every result before it leaves the API.
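
The source-selection half of that plan can be sketched as follows (the right names and source labels are illustrative):

```typescript
type KbSource = "canon" | "org" | "personal";

// Each leg of the union is included only if the delegation carries the
// matching right; Public Canon is always in the plan. The final
// classification filter described above would run on the merged results.
function planSources(rights: Set<string>): KbSource[] {
  const plan: KbSource[] = ["canon"];
  if (rights.has("kb:read:org")) plan.push("org");
  if (rights.has("kb:read:personal")) plan.push("personal");
  return plan;
}
```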

What about prompt injection?

A clever user might try to coax the Companion into surfacing Internal docs from a Public-only delegation. Two defenses:

  1. Pre-retrieval gate. The classification check happens before the vector search runs. The embeddings for Internal docs are not in the candidate set for a Public delegation. There's nothing to inject around.
  2. Post-retrieval filter. Even if a doc somehow leaked into the candidates, the response builder filters out anything above the delegation's classification level before composing the answer.
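
The second defense is a plain allow-list filter over the candidates. A sketch, with hypothetical names:

```typescript
interface Candidate {
  id: string;
  classification: string;
  text: string;
}

// Belt-and-braces layer: anything the delegation is not entitled to read
// is dropped before the answer is composed, even if the pre-retrieval
// gate somehow let it through.
function postRetrievalFilter(candidates: Candidate[], allowed: Set<string>): Candidate[] {
  return candidates.filter((c) => allowed.has(c.classification));
}
```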

So the Companion isn't relying on its own willpower to refuse. The platform doesn't give it the option.

Auditability

Every KB read is logged with: the passport DID, the delegation ID, the doc ID, the classification, the query, the timestamp. An auditor can reconstruct exactly what any agent saw. The provenance ledger is append-only and signed.
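
One way to sketch such a record and its append-only chaining — each entry's signature covers the previous one, so rewriting history invalidates everything after it. HMAC stands in here for the real signing scheme, which this post does not specify:

```typescript
import { createHmac } from "node:crypto";

// Illustrative audit record carrying the fields named above.
interface KbReadEvent {
  passportDid: string;
  delegationId: string;
  docId: string;
  classification: string;
  query: string;
  timestamp: string; // ISO 8601
}

// Chained signature: tampering with any earlier entry changes every
// signature downstream of it.
function appendEntry(prevSig: string, event: KbReadEvent, key: string): string {
  return createHmac("sha256", key).update(prevSig + JSON.stringify(event)).digest("hex");
}
```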

Practical: choosing a classification for your own docs

If you're publishing internal docs into HUMΛN (e.g. via a connector or the KB API):

  • Default to Internal unless you explicitly want it Public.
  • Use ExternalPartner for docs you're sharing with named partners but not yet GA.
  • Use FoundersOnly only for material that should never reach an LLM under any circumstance.

When in doubt, classify up (more restrictive). It's easier to declassify later than to recall a leaked doc.
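
That decision rule fits in a few lines. A hypothetical sketch, not the actual KB API:

```typescript
type Classification = "Public" | "Internal" | "ExternalPartner" | "FoundersOnly";

// Defaults reflect the guidance above: anything not explicitly marked
// otherwise lands at Internal -- i.e. classify up when in doubt.
function classify(opts: {
  explicitlyPublic?: boolean;
  partnerShared?: boolean;
  neverToLlm?: boolean;
}): Classification {
  if (opts.neverToLlm) return "FoundersOnly";
  if (opts.partnerShared) return "ExternalPartner";
  if (opts.explicitlyPublic) return "Public";
  return "Internal";
}
```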

Summary

The Companion is governed, not just trained. The delegation determines what it can see; the classification determines who is in the candidate set; the audit log proves what actually happened. That's the trust model.

Next

  • Doc 5 — Intent Routing Architecture.
  • AI corpus: /ai/articles/guardrails-and-boundary-contracts.md.