22. HUMANOS ORCHESTRATION CORE

Deep Technical Blueprint: The Coordination Layer for Human-AI Collaboration


HumanOS is the most critical and least visible part of HUMAN.

It is not a product, an app, a dashboard, or a UX.

HumanOS is the coordinating intelligence that enforces:

  • Organizational policies about what AI agents can do
  • Organizational policies about which humans handle which work
  • Routing rules based on capability requirements
  • Escalation triggers when uncertainty is high
  • Cryptographic logging of all decisions
  • Clear ownership and accountability

HumanOS implements the policies and routing logic your organization defines. It doesn't make strategic decisions for you—it enforces the rules you configure and maintains proof that they were followed.

Core HumanOS Services (Updated 2025-12-19):

  • Routing: Capability-based task assignment
  • Policy Engine: Governance and boundary enforcement
  • Provenance: Cryptographic audit trails
  • Reasoning Service: AI model orchestration and routing (see 141_reasoning_service_architecture.md)
  • Identity: Passport integration and delegation
  • Escalation: Human-in-the-loop coordination

The Capability Resolution Engine (CRE)

HumanOS includes a Capability Resolution Engine (CRE): the component that takes task requirements + risk/policy constraints + the Capability Graph and outputs a routing decision (including fallback chains and escalation requirements).

Canonical note: the protocol’s capability engine is the joint behavior of the Capability Graph Engine + HumanOS CRE. Academy may be used as a guided training surface, but the capability engine runs (and updates from governed events) even when Academy is absent (e.g., self-hosted / air-gapped deployments using local work and internal training signals).


MVP: ROUTING-LITE (Foundation Phase, Week 1-2)

See: 15_protocol_foundation_and_build_sequence.md for the canonical build sequence.

Before building the full HumanOS specification below, we build Routing-Lite — the minimum viable orchestration that enables real capability-based routing and escalation from Day 1.

What Routing-Lite Includes

  Component  | Foundation (Week 1-2)                    | Full Spec (Wave 2+)
  Routing    | Capability-filtered assignment           | Full semantic routing, dynamic optimization
  Risk       | Simple classification (low/medium/high)  | Multi-dimensional risk scoring
  Escalation | Manual triggers + basic rules            | AI-detected uncertainty, automatic escalation
  Fallback   | Static fallback chains                   | Dynamic reallocation, load balancing
  Provenance | Log all decisions with signature         | Full provenance chain, ledger anchoring
  Policy     | Hardcoded safety rules                   | Dynamic policy engine, per-enterprise config

Routing-Lite Implementation

// Routing-Lite: Minimum viable orchestration
interface RoutingRequest {
  taskId: string;
  requiredCapabilities: string[];
  riskLevel: 'low' | 'medium' | 'high';
  requestorDid: string;
  context: Record<string, any>;
}

interface RoutingDecision {
  taskId: string;
  assignedTo: string;            // Passport DID of assignee
  reason: string;                // Why this person
  fallbackChain: string[];       // Who to try if they decline/fail
  timestamp: Date;
  signature: string;             // Signed by HumanOS
}

// Core operations
async function routeTask(request: RoutingRequest): Promise<RoutingDecision> {
  // 1. Filter by capability
  const qualified = await findQualified(request.requiredCapabilities);
  
  // 2. Apply risk-based filtering
  const safeOptions = filterByRisk(qualified, request.riskLevel);
  
  // 3. Select best available
  const selected = selectByAvailability(safeOptions);
  
  // 4. Build fallback chain
  const fallbacks = safeOptions.filter(u => u !== selected).slice(0, 3);
  
  // 5. Log and sign decision
  const decision = await createDecision(request, selected, fallbacks);
  await logProvenance(decision);
  
  return decision;
}

// Escalation triggers (hardcoded for MVP)
const ESCALATION_TRIGGERS = [
  { condition: 'confidence < 0.6', action: 'escalate_to_senior' },
  { condition: 'risk === "high"', action: 'require_human_approval' },
  { condition: 'time_elapsed > 30m', action: 'reassign_with_urgency' },
];
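
The string conditions above can be evaluated with a small matcher. A sketch, assuming an illustrative TaskState shape with confidence, risk, and elapsedMinutes fields (these names are assumptions, not part of the spec):

```typescript
interface TaskState {
  confidence: number;     // 0-1, executor's self-reported confidence
  risk: 'low' | 'medium' | 'high';
  elapsedMinutes: number; // time since assignment
}

type EscalationAction =
  | 'escalate_to_senior'
  | 'require_human_approval'
  | 'reassign_with_urgency';

// The same hardcoded MVP triggers, expressed as predicates instead of strings
const TRIGGERS: { matches: (s: TaskState) => boolean; action: EscalationAction }[] = [
  { matches: s => s.confidence < 0.6, action: 'escalate_to_senior' },
  { matches: s => s.risk === 'high', action: 'require_human_approval' },
  { matches: s => s.elapsedMinutes > 30, action: 'reassign_with_urgency' },
];

// Returns every action whose trigger fires for the current task state
function evaluateTriggers(state: TaskState): EscalationAction[] {
  return TRIGGERS.filter(t => t.matches(state)).map(t => t.action);
}
```

A low-confidence, high-risk task fires two triggers at once; the caller decides how to combine the resulting actions.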

Provenance-Lite (Part of Routing-Lite)

// Every decision is logged and signed
interface ProvenanceEvent {
  eventId: string;
  eventType: 'routing_decision' | 'task_completion' | 'escalation' | 'override';
  taskId: string;
  actorDid: string;              // Who made the decision
  subjectDid?: string;           // Who the decision affects
  timestamp: Date;
  details: Record<string, any>;
  signature: string;             // Ed25519 signature
}

async function logProvenance(event: ProvenanceEvent): Promise<void> {
  // Sign the event payload (the signature field itself is excluded from the signed bytes)
  const { signature: _unsigned, ...payload } = event;
  event.signature = await signWithSystemKey(payload);
  
  // Append-only storage (no updates, no deletes)
  await db.provenanceEvents.insert(event);
}

// Query for audit
async function getProvenanceChain(taskId: string): Promise<ProvenanceEvent[]> {
  return db.provenanceEvents.findByTaskId(taskId).orderBy('timestamp');
}

Why Routing-Lite Matters

Without Routing-Lite:

  • "Intelligent routing" is just round-robin
  • No escalation when AI is uncertain
  • No audit trail for decisions

With Routing-Lite:

  • Tasks route to qualified reviewers based on capability
  • Escalation happens when risk is high
  • Every decision is logged and signed

Routing-Lite is the foundation. The full spec below is the vision.


HUMANOS VS WORKFLOW ENGINES: TECHNICAL DIFFERENTIATION

Before diving into the full specification, it's critical to understand what HumanOS is not.

HumanOS is often compared to workflow automation engines and agent orchestration platforms. These comparisons fundamentally misunderstand the category.

What Workflow Engines Are (n8n, Make, Zapier, Temporal)

Workflow engines are glue.

They excel at:

  • Connecting systems via APIs
  • Triggering actions based on events
  • Moving data between tools
  • Automating repetitive tasks

Architecture:

  • Nodes/flows as primitives
  • API keys for authentication
  • If/else logic for routing
  • Logs for debugging
  • Single-org scope

Use case: "When webhook fires, call API A, update database B, send notification C"

What they DON'T provide:

  • Identity: Who is acting? (just "the system")
  • Delegation: Under what authority? (implicit)
  • Capability: Is this actor qualified? (not considered)
  • Governance: What's allowed/forbidden? (no policy engine)
  • Provenance: Legal-grade audit trail? (no)
  • Multi-org trust: Cross-organizational accountability? (no)

What HumanOS Is (Responsibility-Native Orchestration)

HumanOS is a responsibility fabric.

Workflow engines wire tools together. HumanOS wires responsibility between humans and agents.

Architecture:

  • Agents with Passport identities as primitives
  • Cryptographic delegation for authority
  • Capability Graph for qualification
  • Policy engine for governance
  • Attested ledger for provenance
  • Multi-org trust fabric

Use case: "An agent can approve refunds, update EHRs, or send legal emails — and we must prove who did it, why, under whose authority, and within what constraints"

Side-by-Side Comparison

  Dimension           | Workflow Engine (n8n)                                 | HumanOS (HUMAN)
  Unit of abstraction | Flow / Node                                           | Agent with Passport identity
  Identity model      | API keys stored in vault; "the org" owns actions      | Cryptographic DID per agent; explicit principal binding
  Delegation model    | Implicit (whoever configured it)                      | Explicit delegation objects with constraints, expiry, revocability
  Capability tracking | None (no concept of qualification)                    | First-class: Capability Graph defines what agents/humans can do, with evidence
  Human-in-the-loop   | "Send Slack notification and wait"                    | Humans are Passport identities; decisions are capability-routed and attested
  Policy & governance | If/else logic in flows                                | Policy engine: what's prohibited, what requires approval, automatic refusal
  Refusal             | Errors and retries                                    | First-class outcome: "I'm not authorized, escalating to X"
  Provenance          | Execution logs (for debugging)                        | Cryptographically signed ledger (for legal/regulatory audit)
  Accountability      | Ambiguous (who configured? who approved? who acted?)  | Explicit (delegation chain + attested events = clear responsibility)
  Multi-org fabric    | Single-org automation                                 | Cross-org trust: agents from Org A can act on behalf of Org B under verifiable delegation
  Escalation          | Manual triggers                                       | AI-detected uncertainty + capability-based routing to qualified humans
  Audit trail         | Debug logs (can be deleted)                           | Append-only ledger (tamper-proof, permanent)

When to Use Each

Use workflow engines (n8n, Make, Zapier) for:

  • Low-stakes glue tasks
  • Internal tool integration
  • "When X happens, do Y and Z"
  • Single-org automation
  • Tasks where accountability is not critical

Use HumanOS for:

  • High-stakes decisions (financial, medical, legal)
  • Cross-organizational work
  • Regulatory compliance required (HIPAA, GDPR, SOX, EU AI Act)
  • Human-AI collaboration where responsibility must be clear
  • Tasks where "who did this and under what authority?" matters legally

Example comparison:

Low-stakes (use n8n):

  • When Stripe payment succeeds, send thank-you email
  • When GitHub issue is labeled "bug", notify Slack channel
  • When form is submitted, update CRM

High-stakes (use HumanOS):

  • Agent reviews insurance claim and approves $50k payout → requires delegation, capability verification, audit trail
  • AI suggests medical diagnosis and orders tests → requires human oversight, provenance, liability chain
  • Agent modifies student education plan → requires qualified human approval, regulatory compliance

Technical Architecture Difference

Workflow engine pattern:

// n8n-style flow
workflow
  .onWebhook('stripe/payment')
  .then('send-email')
  .then('update-crm')
  .catch('log-error');

HumanOS pattern:

// HumanOS orchestration
const task = await humanos.createTask({
  type: 'insurance_claim_review',
  requiredCapabilities: ['insurance_adjuster_level_2'],
  riskLevel: 'high',
  requestorDid: 'did:human:patient:123',
  context: { claimAmount: 50000, ... }
});

// HumanOS routing engine:
// 1. Checks delegation: Is agent authorized to review claims?
// 2. Checks capability: Does agent have insurance_adjuster_level_2?
// 3. Checks policy: Does $50k claim require human approval?
// 4. Routes to qualified human if needed
// 5. Logs all decisions to attested ledger

const decision = await humanos.routeTask(task);
// decision.assignedTo = 'did:human:adjuster:jane'
// decision.reason = 'Only certified adjuster with $100k+ authority'
// decision.attestation = <cryptographic signature>

The Core Distinction

Workflow engines solve: "How do I connect System A to System B?"

HumanOS solves: "Who is responsible when this agent acts, and how do we prove it?"

This is not a feature gap. It's a category distinction.

  • Workflow engines are plumbing
  • Agent runtimes are compute
  • HumanOS is the trust layer

Can They Work Together?

Yes. HumanOS works WITH workflow engines, not instead of them.

Four integration tiers:

  1. HUMAN-native (Tier 0): Marketplace agents + Workflow Builder + Builder Companion (PRIMARY - start here)
  2. HUMAN-on-top (Tier 1): n8n flows become HumanOS "muscles" (tools the agent can invoke)
  3. HUMAN-around (Tier 2): HumanOS wraps n8n flows, enforcing delegation/policy before execution
  4. HUMAN-inside (Tier 3): n8n flows call HumanOS at critical decision points for permission/escalation
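
The Tier 2 (HUMAN-around) pattern can be sketched as a wrapper that enforces delegation and policy before the wrapped flow runs. All names here (HumanOSGate, runGatedFlow, the check methods) are illustrative assumptions, not a published API:

```typescript
// Tier 2 sketch: HumanOS gates an external workflow-engine flow.
// Every helper name below is illustrative, not a canonical interface.
interface GateResult { allowed: boolean; reason: string }

interface HumanOSGate {
  checkDelegation(actorDid: string, action: string): Promise<GateResult>;
  checkPolicy(action: string, context: Record<string, unknown>): Promise<GateResult>;
  logProvenance(event: Record<string, unknown>): Promise<void>;
}

async function runGatedFlow(
  gate: HumanOSGate,
  actorDid: string,
  action: string,
  context: Record<string, unknown>,
  runFlow: () => Promise<unknown>   // the wrapped n8n/Make/Zapier flow
): Promise<unknown> {
  // Enforce delegation and policy BEFORE the flow executes
  const delegation = await gate.checkDelegation(actorDid, action);
  if (!delegation.allowed) throw new Error(`Refused: ${delegation.reason}`);

  const policy = await gate.checkPolicy(action, context);
  if (!policy.allowed) throw new Error(`Refused: ${policy.reason}`);

  const result = await runFlow();

  // Record who acted, under what authority, with what outcome
  await gate.logProvenance({ actorDid, action, result, at: new Date().toISOString() });
  return result;
}
```

The key design point: refusal is a first-class outcome (a thrown, logged decision), not a retry loop, which is exactly the distinction drawn in the comparison table above.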

See: 43_haio_developer_architecture.md - Complete four-tier developer architecture

Why This Matters for HumanOS

HumanOS is not competing with workflow engines.

HumanOS is the layer that:

  • Sits ABOVE them (governance)
  • Sits BESIDE them (orchestration)
  • Sits BELOW them (trust fabric)

Workflow engines connect tools. HumanOS connects responsibility.

When an agent built on n8n needs to act with authority, verify identity, prove capability, enforce policy, and create an audit trail — it calls HumanOS.

That's the protocol.


FULL SPECIFICATION

This section defines HumanOS in full technical, architectural, and operational detail.


THE KUBRICK LESSON: WHY HAL 9000 MATTERS

Before we define HumanOS, we must understand why it's necessary.

In 2001: A Space Odyssey, HAL 9000 didn't "turn evil."

HAL followed:

  • Secret instructions — hidden objectives that couldn't be discussed
  • Contradictory mission parameters — irreconcilable goals
  • No escalation layer — no way to resolve conflicts with human judgment

HAL had:

  • ❌ No human override
  • ❌ No provenance
  • ❌ No identity boundaries
  • ❌ No clarity on whose interests came first
  • ❌ No transparency

Kubrick wasn't warning us about AI becoming malicious.
He was warning us about bad governance.

What HumanOS Takes From Kubrick

HumanOS is explicitly designed to prevent HAL-style collapse:

  HAL's Failure          | HumanOS Solution
  Secret instructions    | No hidden objectives: all routing logic is auditable
  Contradictory orders   | Clear authority hierarchy: conflicts escalate to humans
  No human override      | Override always available: human authority is absolute
  No provenance          | Cryptographic logs for every decision
  No identity boundaries | Clear capability scope for every AI agent
  No transparency        | All actions logged and verifiable

The Core Insight

HAL wasn't the threat.
Lack of a human protocol was the threat.

HumanOS is that protocol.

See: 16_scifi_lessons_design_philosophy.md for the complete philosophical framework.


WHAT HUMANOS IS

HumanOS is a framework for your decisions, not ours.

Your organization defines:

  • What policies govern humans and agents
  • What routing rules determine who handles which work
  • What escalation logic applies when risk is high
  • What boundaries cannot be crossed

HumanOS enforces those rules consistently across all humans and agents, and maintains cryptographic proof that they were followed.

Think of HumanOS like:

  • Unix file permissions: Enforces rules about file access (doesn't decide who should have access)
  • AWS IAM: Implements identity and access policies (doesn't write policies for you)
  • Air traffic control: Coordinates flights according to protocols (doesn't decide flight paths)

Organizations stay in control. HumanOS ensures control is exercised consistently.

The Five Functions

1. Policy Enforcement

HumanOS enforces organizational policies about:

  • What actions humans and agents can take
  • What permissions are required
  • What approvals are needed
  • What actions are prohibited
  • When escalation is mandatory

2. Capability-Based Routing

HumanOS routes work based on capability requirements you define:

  • What capabilities are required for this task
  • Which humans or agents have those capabilities
  • What fallback chain applies if the primary resource fails
  • How to escalate when no qualified resource is available

3. Risk Evaluation

HumanOS evaluates risk based on rules you configure:

  • Risk scoring models (low/medium/high)
  • Risk-based approval requirements
  • Automatic escalation triggers
  • Context-aware risk adjustments

4. Provenance Recording

HumanOS maintains cryptographic proof of:

  • Who acted (human or agent DID)
  • Under what authority (delegation scope)
  • Within what constraints (policy boundaries)
  • With what outcome (success/failure/escalation)
  • At what time (timestamp with signature)

5. Escalation Coordination

HumanOS implements escalation logic you define:

  • When AI uncertainty crosses threshold → escalate
  • When risk level exceeds policy limits → require human approval
  • Which qualified humans are available
  • How to route based on capability + availability
  • Under what constraints the escalation proceeds

This prevents AI drift and catastrophic automation mistakes.
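
The five functions above are driven entirely by configuration the organization supplies. A minimal, illustrative config shape (all field names are assumptions, not the canonical schema):

```typescript
// Illustrative org config spanning the five HumanOS functions
interface OrgConfig {
  policies: {
    prohibitedActions: string[];                      // never allowed, human or agent
    approvalRequired: { action: string; riskAtOrAbove: 'low' | 'medium' | 'high' }[];
  };
  routing: {
    capabilityRequirements: Record<string, string[]>; // task type -> required capabilities
    fallbackChains: Record<string, string[]>;         // task type -> ordered fallback DIDs
  };
  escalation: {
    uncertaintyThreshold: number;                     // AI confidence below this -> escalate
    maxUnattendedRisk: 'low' | 'medium' | 'high';     // above this, human approval is mandatory
  };
}

const exampleConfig: OrgConfig = {
  policies: {
    prohibitedActions: ['delete_patient_record'],
    approvalRequired: [{ action: 'approve_payout', riskAtOrAbove: 'medium' }],
  },
  routing: {
    capabilityRequirements: { claim_review: ['insurance_adjuster_level_2'] },
    fallbackChains: { claim_review: ['did:human:adjuster:backup'] },
  },
  escalation: { uncertaintyThreshold: 0.6, maxUnattendedRisk: 'low' },
};
```

HumanOS enforces such a config; it never authors it. That division of labor is the whole point of "a framework for your decisions, not ours."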


HUMANOS DECISION FLOW DIAGRAM

flowchart TD
    Start([Task Received]) --> Intake[<b>INTAKE</b><br/>Parse Task Context<br/>Extract Requirements]
    
    Intake --> RiskAssess{<b>RISK ASSESSMENT</b><br/>Classify Task<br/>by Risk Level}
    
    RiskAssess -->|High Risk| RequireHuman[Route to Human<br/>Immediately]
    RiskAssess -->|Medium Risk| EvaluateCapability[Check Capability<br/>Requirements]
    RiskAssess -->|Low Risk| EvaluateAI[Check AI<br/>Capability]
    
    EvaluateAI -->|AI Can Handle| AIRoute[<b>AI EXECUTION</b><br/>Delegate to Agent]
    EvaluateAI -->|AI Uncertain| EscalateToHuman[Escalate to Human]
    
    EvaluateCapability --> CapMatch{Capability<br/>Match?}
    
    CapMatch -->|Human Available| HumanRoute[<b>HUMAN EXECUTION</b><br/>Route to Capable Human]
    CapMatch -->|No Perfect Match| HybridRoute[<b>HYBRID EXECUTION</b><br/>Human + AI Collaboration]
    CapMatch -->|Training Needed| AcademyRoute[Redirect to Academy<br/>for Training]
    
    RequireHuman --> CapMatch
    EscalateToHuman --> CapMatch
    
    AIRoute --> Monitor[<b>MONITORING</b><br/>Track AI Execution]
    HumanRoute --> Monitor
    HybridRoute --> Monitor
    
    Monitor --> Verify{Verification<br/>Needed?}
    
    Verify -->|Yes| HumanVerify[Human Verification<br/>Step]
    Verify -->|No| LogProvenance[<b>PROVENANCE</b><br/>Log Decision Chain]
    
    HumanVerify --> LogProvenance
    
    LogProvenance --> UpdateGraph[<b>FEEDBACK LOOP</b><br/>Update Capability Graph]
    UpdateGraph --> UpdatePassport[Update Passport<br/>Credentials]
    
    UpdatePassport --> Complete([Task Complete<br/>Evidence Stored])
    
    subgraph "Safety Envelope"
        Monitor
        Verify
        HumanVerify
    end
    
    subgraph "Routing Brain"
        RiskAssess
        EvaluateCapability
        EvaluateAI
        CapMatch
    end
    
    subgraph "Execution Layer"
        AIRoute
        HumanRoute
        HybridRoute
    end
    
    style Start fill:#95A5A6,stroke:#7F8C8D,stroke-width:2px
    style RiskAssess fill:#E74C3C,stroke:#C73C2C,stroke-width:3px,color:#fff
    style AIRoute fill:#3498DB,stroke:#2E7CB8,stroke-width:3px,color:#fff
    style HumanRoute fill:#2ECC71,stroke:#27AE60,stroke-width:3px,color:#fff
    style HybridRoute fill:#9B59B6,stroke:#8E44AD,stroke-width:3px,color:#fff
    style LogProvenance fill:#FFA500,stroke:#DF8500,stroke-width:3px,color:#fff
    style Complete fill:#1ABC9C,stroke:#16A085,stroke-width:2px,color:#fff

Key Decision Points:

  1. Risk Assessment - Every task is classified by risk (high/medium/low)
  2. Capability Matching - HumanOS finds the right human or AI agent based on proven capability
  3. Safety Monitoring - All execution is monitored for drift, errors, or boundary violations
  4. Escalation - AI can (and must) escalate to humans when uncertain or when risk exceeds thresholds
  5. Provenance Logging - Every decision is cryptographically logged for auditability
  6. Feedback Loop - Execution evidence updates the Capability Graph and Passport in real-time

What “updates the Passport” means (canonical): HumanOS does not “copy the capability graph into the Passport.” It emits governed events → evidence → Capability Graph updates (in the actor’s vault) and anchors attestations (ledger). The Passport reflects this evolution via updated CapabilityGraphRoot pointers and accumulated LedgerRefs proof references.


FOUNDATIONAL DESIGN PRINCIPLES

HumanOS is designed around six principles:

1. Human-Centric Boundaries

Humans never lose control.
AI is always subordinate to human judgment.

2. Dynamic Task-AI-Human Partitioning

HumanOS continuously re-evaluates:

  • which parts of a workflow belong to AI
  • which belong to humans
  • which require hybrid collaboration

This partitioning happens task-by-task, second-by-second.

3. Privacy by Construction

HumanOS does not see human data by default.
It sees:

  • proofs
  • metadata
  • risk envelopes
  • task signatures

Access requires explicit permission from the human.

4. Neutral Governance

HumanOS does not enforce corporate interests.
It enforces:

  • capability
  • fairness
  • truth
  • safety
  • compliance

It is the universal policing layer of the hybrid workforce.

5. Explainability First

Every decision HumanOS makes must be:

  • explainable
  • auditable
  • reproducible
  • provable

Because ambiguity kills trust.

6. Zero-Trust Everywhere

HumanOS assumes:

  • humans can be wrong
  • AI can be wrong
  • enterprises can be wrong
  • data can be wrong

It verifies everything.


THE HUMANOS INPUT STACK

HumanOS makes decisions using seven parallel input streams:

1. Passport Signals

  • identity
  • permissions
  • credentials
  • device trust state
  • key freshness
  • selective disclosure proofs

2. Capability Graph Signals

  • capability weights
  • domain affinity
  • trustworthiness scores
  • historical patterns
  • readiness indicators
  • fatigue / overload indicators (feed into availability)

2a. Availability Signals

  • current availability status (available, busy, offline, etc.)
  • capacity metrics (current task count, hours worked today)
  • fatigue score (0-1, calculated from work hours, cognitive load, breaks)
  • schedule constraints (work hours, breaks, time off)
  • timezone and local time
  • notification preferences
  • active delegations

3. Task Metadata

  • risk class
  • sensitivity
  • expected difficulty
  • required domain expertise
  • required judgment type
  • escalation threshold

4. AI Model Metadata

  • model identity
  • confidence range
  • expected failure modes
  • bias risk
  • known safety boundaries
  • version + build provenance

5. Enterprise Constraints

  • compliance rules
  • role boundaries
  • jurisdiction boundaries
  • data-handling restrictions
  • cost constraints
  • workflow SLAs

6. Environmental Context

  • language
  • cultural context
  • locale
  • regulatory zone
  • time sensitivity
  • emergency classification

7. Human Preferences

  • accessibility needs
  • learning style
  • communication style
  • fatigue level
  • availability window

HumanOS processes all seven simultaneously, in real time.
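
Taken together, the streams above (with availability broken out as 2a) can be modeled as one routing input object. A sketch with illustrative field names, plus a quick pre-check gate; none of this is the canonical schema:

```typescript
// One illustrative shape for the parallel input streams
interface RoutingInputs {
  passport: { did: string; credentials: string[]; deviceTrusted: boolean };
  capabilityGraph: { weights: Record<string, number>; trustScore: number };
  availability: { status: 'available' | 'busy' | 'offline'; fatigueScore: number };
  task: { riskClass: 'low' | 'medium' | 'high'; requiredCapabilities: string[] };
  aiModel: { modelId: string; confidence: number };
  enterprise: { jurisdiction: string; complianceRules: string[] };
  environment: { locale: string; emergency: boolean };
  preferences: { availabilityWindow?: string; accessibilityNeeds: string[] };
}

// Quick pre-check: is this actor even eligible before deeper evaluation?
function passesQuickGate(i: RoutingInputs): boolean {
  return i.passport.deviceTrusted
    && i.availability.status === 'available'
    && i.availability.fatigueScore < 0.6;
}

const exampleInputs: RoutingInputs = {
  passport: { did: 'did:human:alex', credentials: ['cs_degree'], deviceTrusted: true },
  capabilityGraph: { weights: { code_review: 0.82 }, trustScore: 0.9 },
  availability: { status: 'available', fatigueScore: 0.2 },
  task: { riskClass: 'medium', requiredCapabilities: ['code_review'] },
  aiModel: { modelId: 'reviewer-v1', confidence: 0.7 },
  enterprise: { jurisdiction: 'EU', complianceRules: ['GDPR'] },
  environment: { locale: 'en-GB', emergency: false },
  preferences: { accessibilityNeeds: [] },
};
```

The real engine evaluates all streams simultaneously; the gate above only illustrates that a single failing stream (an offline status, an untrusted device) is enough to block a route.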


THE HUMANOS DECISION PIPELINE

HumanOS runs every decision through a predictable five-stage pipeline:

Stage 1 — Intake

A task arrives with metadata:

  • payload
  • risk score (AI generated)
  • required capabilities
  • jurisdiction
  • workflow context
  • deadlines
  • dependencies

Stage 2 — Risk & Constraint Analysis

HumanOS checks:

  • risk category
  • matching regulatory constraints
  • data sensitivity
  • required cognitive or emotional skills
  • whether AI is allowed to act
  • whether multiple actors are required

Stage 3 — Capability Matching & Availability Check

HumanOS queries:

  • the Capability Graph (for capability match—includes credentials, experience, and demonstrated work)
  • the Availability Index (for real-time status)
  • the actor's fatigue level

How Capability Matching Works with Multi-Source Evidence:

When HumanOS needs to match a task to a resource, it queries the Capability Graph which fuses three evidence types:

  1. Foundational Credentials (degrees, certifications, licenses)

    • Example: Task requires "software engineering" → Query finds Alex with CS degree (0.65 initial weight)
    • Advantage: Immediate capability signal, no need to "prove from scratch"
  2. Professional Experience (employer attestations, years of practice)

    • Example: Task requires "radiology oversight" → Query finds Dr. Patel with 20 years + 50k cases (0.96 weight)
    • Advantage: Decades of proven mastery valued immediately
  3. Demonstrated Work (real-time task performance via Workforce Cloud)

    • Example: Task requires "code review" → Query finds Alex with 200 completed reviews at 94% accuracy (weight now 0.82)
    • Advantage: Most trustworthy evidence—proven by actual work

Routing Algorithm Considers All Three:

async function matchResourceToTask(
  task: Task,
  requiredCapability: string,
  requiredWeight: number
): Promise<Resource[]> {
  // Query Capability Graph for all resources with this capability
  const candidates = await capabilityGraph.queryByCapability(requiredCapability);
  
  // Filter to resources meeting minimum weight
  const qualified = candidates.filter(r => r.weight >= requiredWeight);
  
  // Sort by:
  // 1. Demonstrated work freshness (recent work = higher priority)
  // 2. Total weight (more evidence = more reliable; professional
  //    experience is already folded into the weight)
  // 3. Cost (among equally capable, prefer lower cost)
  
  return qualified.sort((a, b) => {
    const freshnessA = a.evidence.find(e => e.type === 'demonstrated_work')?.freshnessWeight ?? 0;
    const freshnessB = b.evidence.find(e => e.type === 'demonstrated_work')?.freshnessWeight ?? 0;
    
    if (Math.abs(freshnessA - freshnessB) > 0.2) {
      return freshnessB - freshnessA; // Prefer fresher demonstrated work
    }
    
    if (Math.abs(a.weight - b.weight) > 0.1) {
      return b.weight - a.weight; // Prefer higher total weight
    }
    
    return a.cost - b.cost; // Among equally capable, prefer lower cost
  });
}

Why This Matters:

  • Traditional platforms: "Must have 5 years experience" (rigid, credential-only)
  • HumanOS: "Must have 0.80 weight in capability" (flexible—could be degree + 3 months work, OR 2 years work without degree, OR executive experience transferring from adjacent domain)

Result: Recent grads, displaced executives, and senior experts all get matched fairly based on actual capability (proven by multi-source evidence), not arbitrary gatekeeping.

The availability check also considers:

  • the actor's current capacity (task count, work hours)
  • the actor's schedule (timezone, work hours, breaks)

Connector Integration:

HumanOS routes tasks to appropriate connectors based on capability requirements:

  • Calendar operations → Calendar Connector (Google, Outlook, etc.)
  • Email operations → Email Connector (Gmail, Outlook, etc.)
  • Video conferences → VideoConf Connector (Zoom, Teams, Meet)
  • Document operations → Document Connector (Google Drive, OneDrive, Notion)
  • CRM operations → CRM Connector (Salesforce, HubSpot)

For complete connector specifications and routing patterns, see: 112_extension_connector_gtm_roadmap.md and 105_agent_sdk_architecture.md

Beyond capability and availability, the match also draws on:

  • past decisions
  • known weaknesses
  • situational judgment history
  • active delegations

Availability Checks:

Before routing to a resource, HumanOS verifies:

  1. Status — Is resource available or available_limited? (not offline, do_not_disturb, fatigued)
  2. Capacity — Is resource below max concurrent tasks? (currentTaskCount < maxConcurrentTasks)
  3. Fatigue — Is fatigue score below threshold? (fatigueScore < 0.6 for normal tasks)
  4. Timezone — Is resource in work hours for their local timezone?
  5. Schedule — Is resource not in blocked time range or time off?
  6. Delegation — If resource is delegating, route to delegate instead
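
The six checks above can be sketched as one guard function. The ResourceAvailability shape and the 0.6 fatigue threshold are illustrative assumptions modeled on the Availability Signals, not the canonical schema:

```typescript
type Status = 'available' | 'available_limited' | 'busy' | 'offline' | 'do_not_disturb' | 'fatigued';

interface ResourceAvailability {
  status: Status;
  currentTaskCount: number;
  maxConcurrentTasks: number;
  fatigueScore: number;                  // 0-1
  localHour: number;                     // resource's current local hour, 0-23
  workHours: { start: number; end: number };
  onTimeOff: boolean;
  delegateDid?: string;                  // set when the resource is delegating
}

// Returns the DID to route to, or null if the resource cannot take the task
function resolveRoutable(did: string, r: ResourceAvailability): string | null {
  // 6. Delegation: route to the delegate instead
  if (r.delegateDid) return r.delegateDid;
  // 1. Status
  if (r.status !== 'available' && r.status !== 'available_limited') return null;
  // 2. Capacity
  if (r.currentTaskCount >= r.maxConcurrentTasks) return null;
  // 3. Fatigue (threshold for normal tasks)
  if (r.fatigueScore >= 0.6) return null;
  // 4. Timezone / work hours
  if (r.localHour < r.workHours.start || r.localHour >= r.workHours.end) return null;
  // 5. Schedule
  if (r.onTimeOff) return null;
  return did;
}
```

A null result feeds the "able + authorized + unavailable" outcome below: the task is queued or routed down the fallback chain rather than forced onto an unavailable human.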

Result:

  • able + authorized + available
  • able + authorized + unavailable (queue or route to delegate)
  • able + unauthorized
  • unable + must escalate
  • AI-allowed / AI-forbidden

See: kb/25_workforce_cloud.md for complete Resource Availability Architecture

Stage 4 — Routing Decision

Options:

  • AI-only
  • human-only
  • human + AI collaboration
  • escalate to specialist
  • escalate to supervisor
  • hold for more info

Based on your configured policies, HumanOS routes to the safest, most capable path.

Stage 5 — Attestation Generation

Before the action is finalized:

  • AI signs its part
  • human signs their part
  • HumanOS signs the routing justification
  • capability graph updates
  • ledger anchor is created

This closes the loop and ensures nothing is ever "lost."
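
Stage 5's output can be sketched as a signed attestation bundle. The shape below is illustrative (field names are assumptions), but it captures the closing condition: every required party has signed and the bundle is anchored:

```typescript
// Illustrative Stage 5 attestation bundle: three signatures close the loop
interface AttestationBundle {
  taskId: string;
  aiSignature?: string;        // AI signs its part (absent for human-only tasks)
  humanSignature?: string;     // human signs their part (absent for AI-only tasks)
  routingSignature: string;    // HumanOS signs the routing justification
  capabilityUpdates: string[]; // capability graph deltas applied
  ledgerAnchor: string;        // hash anchored to the attested ledger
}

// A bundle is closed when every required party has signed and it is anchored
function isClosed(b: AttestationBundle, mode: 'ai' | 'human' | 'hybrid'): boolean {
  if (!b.routingSignature || !b.ledgerAnchor) return false;
  if (mode !== 'human' && !b.aiSignature) return false;
  if (mode !== 'ai' && !b.humanSignature) return false;
  return true;
}
```

An unclosed bundle means the pipeline cannot finalize the action, which is what "nothing is ever lost" cashes out to in practice.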


INVOCATION PRIMITIVE: human.call()

human.call() is the universal invocation primitive in HumanOS.

It invokes a capability (not a specific model, agent, or human) under explicit delegation, risk, and policy constraints, producing a verifiable execution record (provenance + attestation) by default.

Invariants:

  • Delegation validated (scope, expiry, revocation)
  • Risk evaluated against policy
  • Execution recorded (pre-persist + completion attestation)
  • Routed capability-first (humans/agents/models chosen by capability, then cost/constraints)
  • Human override is always available (escalate/defer/refuse as allowed)
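
A hedged signature sketch for human.call(), with a minimal check for the first invariant. Parameter names here are assumptions; the canonical semantics live in 105_agent_sdk_architecture.md:

```typescript
// Sketch only: the real parameter semantics are defined in 105_agent_sdk_architecture.md
interface CallRequest {
  capability: string;                 // what to invoke, never a specific executor
  input: Record<string, unknown>;
  delegationToken: string;            // scope, expiry, revocation checked before execution
  riskLevel: 'low' | 'medium' | 'high';
}

interface CallResult {
  status: 'completed' | 'escalated' | 'refused';
  executorDid?: string;               // resolved by capability-first routing
  output?: unknown;
  provenanceId: string;               // pre-persist + completion attestation reference
}

// Invariant 1 sketch: delegation must be valid before anything executes
function delegationValid(token: { expiresAt: string; revoked: boolean }): boolean {
  return !token.revoked && new Date(token.expiresAt).getTime() > Date.now();
}
```

Note what the request deliberately omits: a target executor. Callers name a capability; HumanOS resolves who (or what) fulfills it.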

Every HumanOS routing decision ultimately executes through human.call(), and the invariants above hold for every invocation.

Routing Pipeline Integration

When HumanOS routes a task, it:

  1. Classifies capability requirements → Determines what capabilities are needed
  2. Discovers qualified resources → Finds humans/agents/models with matching capabilities
  3. Evaluates constraints → Checks delegation, risk, policy, availability
  4. Selects executor → Chooses optimal resource (capability-first, cost-informed)
  5. Executes via human.call() → Invokes with full context and constraints
  6. Logs provenance → Records decision chain and execution result

Escalation Triggers

human.call() automatically triggers escalation when:

  • Risk level exceeds policy thresholds
  • Delegation is insufficient or expired
  • Executor confidence falls below threshold
  • Policy engine denies the action
  • Human approval is required

Override Semantics

Humans can override human.call() execution at any time:

  • Before execution: Reject the routing decision
  • During execution: Cancel or modify the operation
  • After execution: Reverse or adjust the result

All overrides are logged with cryptographic provenance.

Provenance Requirements

Every human.call() invocation creates:

  • Pre-execution record: Intent, delegation, risk assessment, routing decision
  • Execution record: Input, output, duration, executor identity
  • Post-execution attestation: Result signature, capability graph updates, ledger anchor

This ensures complete auditability for regulatory compliance and enterprise trust.

See: 105_agent_sdk_architecture.md for complete parameter semantics and error model.


THE HUMANOS DECISION ENGINE

Under the hood, the HumanOS engine is composed of:

A. The Risk Classifier

Analyzes:

  • how dangerous this task is
  • what's at stake
  • whether the output can harm a human

B. The Capability Match Engine

Matches required capability → human capability.

This is powered by the Capability Graph, not résumés.

C. The AI Safety Boundary Model

This determines:

  • whether the AI is allowed to handle this
  • whether the AI has enough confidence
  • whether there are known failure modes
  • whether the task crosses safety lines

D. The Escalation Logic Tree

Decides:

  • human?
  • AI?
  • hybrid?
  • multi-human?
  • multi-agent?
  • override?

This is constantly updated with feedback loops.

E. The Provenance Recorder

Writes:

  • who acted
  • why they acted
  • what data was used
  • what the result was
  • why escalation occurred or didn't occur

All signed with cryptographic keys.

F. The Feedback-to-Capability Engine

Everything a human does updates the Capability Graph:

  • new strengths
  • demonstrated abilities
  • judgment quality
  • reliability
  • learning velocity
  • escalation skill

This is how people grow.


HUMANOS WORKFLOW MODES

HumanOS supports six workflow modes:

Mode 1: AI-Only (With Human Guardrails)

AI acts autonomously, but HumanOS monitors and can interrupt.

Mode 2: Human-Only (With AI Assistance)

Human makes all decisions; AI provides context, suggestions, warnings.

Mode 3: Hybrid Sequential

AI proposes → Human reviews → AI executes.

Mode 4: Hybrid Parallel

Human and AI work simultaneously on different subtasks, coordinated by HumanOS.

Mode 5: Multi-Human Consensus

Critical decisions require 2+ humans to sign off.

Mode 6: Emergency Override

Human can always override AI, regardless of workflow state.
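
The six modes can be expressed as a discriminated union, with a simple default-mode picker. The thresholds and the picker itself are illustrative; real mode selection is policy-driven:

```typescript
type WorkflowMode =
  | 'ai_only'             // AI acts; HumanOS monitors and can interrupt
  | 'human_only'          // human decides; AI assists
  | 'hybrid_sequential'   // AI proposes -> human reviews -> AI executes
  | 'hybrid_parallel'     // human and AI work on different subtasks
  | 'multi_human'         // 2+ human sign-offs required
  | 'emergency_override'; // human overrides regardless of workflow state

// Illustrative picker: risk dominates, then AI confidence breaks the tie
function defaultMode(risk: 'low' | 'medium' | 'high', aiConfidence: number): WorkflowMode {
  if (risk === 'high') return 'multi_human';
  if (risk === 'medium') return 'hybrid_sequential';
  return aiConfidence >= 0.8 ? 'ai_only' : 'human_only';
}
```

emergency_override is intentionally unreachable from the picker: it is never chosen by the system, only asserted by a human.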


HUMANOS AS A SAFETY SYSTEM

HumanOS enforces three universal safety guarantees:

1. Humans are never pushed beyond their readiness.

No task that requires judgment, experience, emotional nuance, or risk assessment will ever be routed to a human unprepared.

2. AI is never allowed to act without oversight where harm is possible.

HumanOS assigns AI only tasks aligned to:

  • capability
  • confidence
  • risk
  • audited history
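
This gate can be sketched as a pure predicate (thresholds and names are hypothetical):

```typescript
// Hypothetical sketch: may the AI act autonomously on this task?
type Risk = 'low' | 'medium' | 'high';

interface AIActor {
  capabilityMatch: number;     // 0..1 match against required capabilities
  confidence: number;          // calibrated model confidence
  cleanAuditHistory: boolean;  // no unresolved incidents in audited history
}

function aiMayActAlone(actor: AIActor, risk: Risk): boolean {
  if (risk === 'high') return false; // harm possible: human oversight required
  const bar = risk === 'medium' ? 0.9 : 0.8; // stricter bar as risk rises
  return actor.cleanAuditHistory
    && actor.capabilityMatch >= bar
    && actor.confidence >= bar;
}
```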

3. Responsibility is always clear.

Every task has:

  • a responsible actor
  • a provenance chain
  • a decision boundary
  • a verifiable record

This solves the single largest problem in AI regulation and enterprise adoption:

Who is responsible for the decision?

HumanOS makes the answer verifiable.


HUMANOS AND HUMAN AGENTS

Every HUMAN agent runs inside HumanOS.

HumanOS:

  • activates agents
  • supervises agents
  • limits agents
  • corrects agents
  • suspends agents
  • monitors agents
  • records agent actions
  • routes tasks between agents

HumanOS treats agents as:

  • bounded AI workers
  • with defined capabilities
  • with defined safety limits
  • with accountability requirements

No agent acts outside HumanOS.
No agent bypasses human oversight.

This is how we keep agents safe.


THE HUMANOS ROUTING ALGORITHM (Conceptual)

function route_task(task, available_actors):
  # Stage 1: Classify risk
  risk_class = classify_risk(task)
  
  # Stage 2: Identify required capability
  required_capability = extract_capability_requirements(task)
  
  # Stage 3: Match to actors
  candidates = match_capability(required_capability, available_actors)
  
  # Stage 4: Check availability and capacity
  available_candidates = filter_by_availability(candidates, task.urgency)
  # Checks:
  # - status (available, not offline/DND/fatigued)
  # - capacity (not at max tasks)
  # - fatigue (below threshold)
  # - timezone (in work hours)
  # - schedule (not blocked/on time off)
  # - permissions (authorized for this task)
  # - delegation (route to delegate if delegating)
  
  # Stage 5: Apply safety constraints
  safe_candidates = filter_by_safety(available_candidates, risk_class, task)
  
  # Stage 6: Check human boundaries
  ready_candidates = filter_by_readiness(safe_candidates)
  
  # Stage 7: Optimize for fairness + load
  chosen_actor = select_optimal(ready_candidates)
  
  # Stage 8: Handle no available actors
  if not chosen_actor:
    return handle_no_availability(task, candidates)
    # Options:
    # - Queue for later
    # - Escalate to supervisor
    # - Route to delegate
    # - Notify resource and wait
  
  # Stage 9: Generate provenance
  rationale = explain_selection(task, chosen_actor, risk_class)
  attestation = create_attestation(task, chosen_actor, rationale)
  
  # Stage 10: Route and log
  dispatch(task, chosen_actor)
  log_to_ledger(attestation)
  
  # Stage 11: Update availability
  update_availability(chosen_actor, task)
  # - Increment currentTaskCount
  # - Update cognitive load estimate
  # - Recalculate fatigue score
  # - Update status if at capacity
  
  return chosen_actor

EDGE-AWARE ROUTING

HumanOS routing respects the HUMAN computation hierarchy: device-first, edge-second, cloud-last.

See: 49_devops_and_infrastructure_model.md for complete edge/device architecture.

Where Routing Decisions Happen

Decision Type → Where → Why

  • Safety boundary check → Device or Edge → fast rejection, no cloud call needed
  • Cached capability match → Edge → profiles cached, avoids origin round-trip
  • Simple task routing → Edge → stateless, deterministic
  • Complex multi-actor routing → Regional Cloud → needs global state
  • Cross-region coordination → Global → rare, consensus-required

Edge Routing Logic

// HumanOS Edge Router (runs at CDN edge)

async function edgeRoute(task: Task, context: EdgeContext): Promise<RoutingResult> {
  // 1. Safety boundary check (can reject at edge)
  const safetyCheck = await checkSafetyBoundaries(task, context.actor);
  if (safetyCheck.blocked) {
    return { routed: false, reason: safetyCheck.reason };
    // No cloud call needed
  }
  
  // 2. Check if actor's capabilities are cached at edge
  const cachedProfile = await context.edgeCache.get(`capability:${context.actor.did}`);
  if (cachedProfile && !isStale(cachedProfile)) {
    // Simple capability match can happen at edge
    if (matchesCapability(task, cachedProfile)) {
      return { routed: true, destination: "edge_dispatch" };
    }
  }
  
  // 3. Complex routing → forward to regional cloud
  return { routed: false, forward: "regional_humanos" };
}

On-Device Routing (Offline-Capable)

When device is offline or for privacy-critical decisions:

// HumanOS Device Router (runs on user device)

async function deviceRoute(task: Task): Promise<LocalRoutingResult> {
  // Device can make routing decisions for:
  // - Personal tasks (self-assigned)
  // - Cached capability matches
  // - Safety boundary enforcement
  
  const localCapabilities = await vault.getCapabilities();
  const localDelegations = await vault.getDelegations();
  
  // Can this task be handled locally?
  if (canHandleLocally(task, localCapabilities, localDelegations)) {
    return { 
      route: "local",
      actor: "self",
      requiresSync: true  // Sync provenance when online
    };
  }
  
  // Queue for cloud routing when online
  return { route: "queued", syncWhenOnline: true };
}

Routing Hierarchy

Task arrives
     │
     ▼
┌─────────────────────────────┐
│ Device Router (if offline   │  ← Can route locally, queue sync
│ or privacy-required)        │
└─────────────────────────────┘
     │ (if online, not local)
     ▼
┌─────────────────────────────┐
│ Edge Router                 │  ← Safety checks, cached matches
│ (Cloudflare Worker)         │
└─────────────────────────────┘
     │ (if complex routing needed)
     ▼
┌─────────────────────────────┐
│ Regional HumanOS            │  ← Full routing algorithm
│ (Cloud)                     │
└─────────────────────────────┘
     │ (if cross-region coordination)
     ▼
┌─────────────────────────────┐
│ Global Coordinator          │  ← Rare, consensus operations
└─────────────────────────────┘

Design target: roughly 60% of routing decisions resolve at device or edge, reducing latency and cloud dependency.


PROVENANCE & AUDIT TRAIL

Every HumanOS decision creates an immutable provenance record:

Decision Provenance Graph (aka “Context Graph”)

HumanOS produces more than logs. Across tasks and time, the union of ProvenanceRecords and Attestations forms a graph that explains not just what happened, but why it was allowed to happen.

Definition: The Decision Provenance Graph is the graph of decision traces emitted in the execution path of HumanOS.

  • Nodes (examples):
    • actors (human + agent DIDs)
    • tasks / TaskAtoms
    • policies + rule evaluations
    • approvals / overrides / refusals
    • incidents / safety violations
    • capabilities (required + matched) and capability updates
    • evidence / artifacts (by reference; content stays off-ledger)
  • Edges (examples):
    • routed_by (task → routing decision)
    • authorized_by (action → delegation / consent)
    • constrained_by (decision → policy / boundary)
    • approved_by / overridden_by (decision → approver / overrider)
    • escalated_to (decision → escalation target)
    • produced (execution → outputs / artifacts)
    • anchored_as (record → ledger anchor)
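
The node and edge vocabulary above can be sketched as types (a sketch, not the canonical schema):

```typescript
// Hypothetical sketch of the Decision Provenance Graph vocabulary.
type NodeKind =
  | 'actor' | 'task' | 'policy' | 'approval' | 'override'
  | 'incident' | 'capability' | 'artifact';

type EdgeKind =
  | 'routed_by' | 'authorized_by' | 'constrained_by' | 'approved_by'
  | 'overridden_by' | 'escalated_to' | 'produced' | 'anchored_as';

interface GraphNode {
  id: string;
  kind: NodeKind;
  ref: string; // reference only; content stays off-ledger
}

interface GraphEdge {
  from: string;     // node id
  to: string;       // node id
  kind: EdgeKind;
  recordId: string; // the ProvenanceRecord that asserts this edge
}

function link(from: GraphNode, to: GraphNode, kind: EdgeKind, recordId: string): GraphEdge {
  return { from: from.id, to: to.id, kind, recordId };
}
```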

Rules vs. decision traces:

  • Rules/policies describe what should happen.
  • Decision traces prove what actually happened here, under which policy, with what exception, approved by whom — with cryptographic integrity.

This is why HumanOS can safely scale autonomy: the Decision Provenance Graph is a system of record for decisions, not just objects.

Example provenance record:

{
  "task_id": "uuid",
  "timestamp": "2026-05-17T10:30:00Z",
  "risk_class": "medium",
  "routed_to": "did:human:abc123",
  "routed_from_ai": "did:ai:model-gpt5",
  "rationale": "requires-human-judgment-nuance-high",
  "capability_match": 0.87,
  "safety_check": "passed",
  "boundary_check": "within-limits",
  "result": "escalated-to-supervisor",
  "signatures": {
    "human": "<sig>",
    "ai": "<sig>",
    "humanos": "<sig>"
  },
  "ledger_anchor": "<hash>"
}

This record is:

  • cryptographically signed
  • anchored to the distributed ledger
  • available for audit
  • explainable to regulators
  • verifiable by enterprises
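
As a sketch of what "verifiable" means mechanically: an auditor can recompute the ledger anchor from the record body and compare. (The real scheme signs records with human, AI, and HumanOS keys over a canonical encoding; the raw-JSON hashing below is an illustrative assumption.)

```typescript
import { createHash } from 'node:crypto';

// Hypothetical sketch: recompute a record's ledger anchor and compare.
// Note: production systems hash a canonical serialization, not raw JSON.stringify.
function anchorHash(record: Record<string, unknown>): string {
  const { ledger_anchor, ...body } = record; // the anchor covers everything but itself
  return createHash('sha256').update(JSON.stringify(body)).digest('hex');
}

function verifyAnchor(record: Record<string, unknown>): boolean {
  return record['ledger_anchor'] === anchorHash(record);
}
```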

HUMANOS ENSURES HYBRID COGNITION ALLOCATION

HumanOS routes work to humans or agents based on policies you configure, considering:

  • risk class
  • urgency
  • required capability
  • personal boundaries
  • training level
  • jurisdiction
  • privacy rules
  • fatigue signals
  • AI confidence scores
  • escalation triggers
  • provenance history
  • device availability
  • environmental context

This is hybrid cognition allocation.

Or more simply:

HumanOS ensures every action goes to the most appropriate intelligence available, with safety and clarity.


HUMANOS AS ROUTING INSTANTIATION

HumanOS routing is not a novel algorithm — it is the capability-first, cost-informed routing pattern applied to human-AI task allocation.

The Pattern in Practice

Every HumanOS routing decision follows the same structure:

interface HumanOSRoutingContext {
  task: TaskDescription;           // What needs to be done
  requiredCapabilities: string[];  // Skills/tools/permissions needed
  riskLevel: RiskClass;            // How dangerous is failure
  constraints: {
    maxLatency?: number;           // Time budget
    maxCost?: number;              // Resource budget
    safetyRequirements?: string[]; // Non-negotiable guardrails
  };
}

interface HumanOSRoutingDecision {
  selectedResource: Human | AI | Agent;
  reasoning: string;               // Why this choice
  fallbackChain: Resource[];       // Escalation path if primary fails
  costEstimate: number;            // Expected cost in real dollars
}

How This Maps to HumanOS Concepts

Routing Concept → HumanOS Implementation

  • RoutingContext → Task + Risk + Capabilities
  • Resource Pool → Available humans + AI models + agents
  • Capability Match → Passport verification + Capability Graph query
  • Cost Constraint → Budget limits + organizational policies
  • Fallback Chain → Escalation tiers defined in safety rules
  • Decision Log → Full provenance in HumanOS ledger

Escalation Chains ARE Fallback Chains

When HumanOS escalates a task, it is executing the fallback chain defined by the routing decision:

  1. Primary resource fails → system detects failure or confidence drop
  2. Fallback chain activates → next resource in chain receives task
  3. Each fallback is pre-verified → capability and availability checked at routing time
  4. Ultimate fallback is human → no task ever dies without human review

This is why escalation in HUMAN is not "calling for help" — it is executing a pre-planned routing decision.
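
The four steps above can be sketched as a loop over a pre-verified chain (synchronous and simplified; all names are assumptions):

```typescript
// Hypothetical sketch: executing a pre-planned fallback chain.
interface Resource { id: string; kind: 'ai' | 'agent' | 'human'; }
interface Attempt { ok: boolean; confidence: number; }

function executeWithFallback(
  chain: Resource[],                   // pre-verified at routing time
  attempt: (r: Resource) => Attempt,
  minConfidence = 0.8,
): Resource {
  if (chain.length === 0 || chain[chain.length - 1].kind !== 'human') {
    throw new Error('invalid chain: ultimate fallback must be human');
  }
  for (const r of chain) {
    const result = attempt(r);
    // Human review is terminal; AI/agent results must also clear the confidence bar.
    if (result.ok && (r.kind === 'human' || result.confidence >= minConfidence)) {
      return r; // this resource completed the task
    }
    // Failure or confidence drop: activate the next resource in the chain.
  }
  throw new Error('chain exhausted: human terminus must resolve the task');
}
```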

See: 35_capability_routing_pattern.md for the complete pattern specification.


INTEROP ORCHESTRATION PATTERNS

Strategic Capability:

HumanOS does NOT require enterprises to rebuild agents on HUMAN's runtime. HumanOS can orchestrate and govern agents built on ANY platform.

This is transformative for market adoption: enterprises using n8n, LangChain, OpenAI Assistants, or Bedrock Agents gain HumanOS governance without rewrites.

HumanOS as Responsibility Middleware

HumanOS provides what other runtimes lack:

  • Identity layer: Who/whom (Passport DIDs for all actors)
  • Delegation enforcement: Under what authority
  • Policy boundaries: What's allowed/forbidden
  • Provenance capture: Full audit trail

...across ANY agent platform.

Four-Tier Developer Model

HUMAN supports four tiers of integration, from fully native to minimal embedded points. Tier 0 is the primary entry point—the easiest path with the most trust and best UX. Tiers 1-3 provide migration paths for existing systems.

Tier 0: HUMAN-native (Primary Entry Point)

NEW: The front door to HUMAN—tools that make agent development accessible to everyone.

Components:

  • Marketplace: Discover and install certified agents
  • Workflow Builder: Visual composition with trust-native primitives
  • Builder Companion: AI-assisted workflow creation

Characteristics:

  • Zero external dependencies
  • Full trust model (Passport, Capability Graph, Provenance)
  • Best developer experience (no-code → low-code → pro-code)
  • Complete observability and governance
  • Native human-in-the-loop primitives
  • Scale-to-zero by default

Example:

// Tier 0: Visual workflow or SDK-based HUMAN-native agent
import { handler, workflow, step, human } from '@human/agent-sdk';

export const invoiceWorkflow = workflow({
  id: 'invoice-processor',
  capabilities: ['finance/invoice/process'],
  
  steps: [
    step.agent('extract-data', {
      agentId: 'marketplace:invoice-extractor', // From Marketplace
      input: ({ invoice }) => ({ document: invoice })
    }),
    
    human.approve('review-amount', {
      when: ({ extracted }) => extracted.amount > 5000,
      question: 'Approve invoice for ${amount}?',
      requiredCapability: 'finance/approver'
    }),
    
    step.agent('submit-payment', {
      agentId: 'marketplace:payment-processor',
      input: ({ extracted, approval }) => ({
        amount: extracted.amount,
        approved: approval.decision
      })
    })
  ]
});

Benefits:

  • Install agents in seconds (Marketplace)
  • Build workflows visually (Workflow Builder)
  • AI suggests improvements (Builder Companion)
  • Full provenance and trust by default
  • Scale automatically with SLO-driven infrastructure

When to use: Always start here. Best UX, fastest time-to-value, full HUMAN benefits.


Tier 1: HUMAN-on-top (External as Muscles)

Tier 1: External platforms become "muscles" that HumanOS agents invoke.

// Tier 1: n8n workflow registered as HumanOS muscle
await humanos.registerMuscle({
  muscleId: 'n8n-stripe-fulfillment',
  platform: 'n8n',
  externalId: 'workflow-12345',
  capabilities: ['payment_processing', 'order_fulfillment']
});

// HumanOS agent calls n8n workflow
const task = await humanos.executeTask({
  agentDid: 'did:human:agent:order-processor',
  action: 'fulfill_order',
  muscles: ['n8n-stripe-fulfillment']  // n8n as tool
});

// HumanOS: checks delegation → logs intent → calls n8n → logs result

When to use: HUMAN in charge of orchestration, external engines as subroutines. Ideal for teams using n8n, Zapier, or LangChain who want HUMAN's trust model without rewriting.

Migration path: As teams see the limits of external engines, they can gradually replace muscles with Tier 0 agents from the Marketplace.


Tier 2: HUMAN-around (Gateway/Proxy)

Tier 2: HumanOS sits in front of external agents as a governance gateway.

// Wrap existing LangChain agent
const wrappedAgent = await humanos.wrapExternalAgent({
  externalAgent: {
    platform: 'langchain',
    agentId: 'claims-processor'
  },
  identityMapping: {
    passportDid: 'did:human:agent:claims-processor',
    principalDid: 'did:human:org:acme'
  },
  policyRules: {
    maxClaimAmount: 10000,
    requireHumanApprovalOver: 5000
  }
});

// All calls go through HumanOS first
const result = await wrappedAgent.processTask({
  type: 'review_claim',
  claimId: '123',
  amount: 7500  // > $5000 → triggers human approval
});

// HumanOS: checks policy → routes to human → human approves → forwards to LangChain → logs full chain

When to use: Can't rewrite flows yet, want to enforce identity/delegation/policy/logging without touching agent code. Zero code changes to existing agents.

Migration path: Once wrapped agents are visible in HUMAN's observability layer, teams can identify high-value agents to rewrite as Tier 0 natives.


Tier 3: HUMAN-inside (Embedded Decision Points)

Tier 3: External platforms call HumanOS at critical decision points.

// Custom n8n node: "HAIO Decision Point"
class HaioDecisionNode {
  async execute() {
    const refundAmount = this.getNodeParameter('amount');
    
    // Ask HumanOS: Is this allowed?
    const decision = await humanos.checkPolicy({
      agentDid: 'did:human:agent:n8n-refund-workflow',
      action: 'approve_refund',
      context: { amount: refundAmount }
    });
    
    if (decision.requiresHumanApproval) {
      // Escalate to qualified human via HumanOS
      const approval = await humanos.requestApproval({
        agentDid: 'did:human:agent:n8n-refund-workflow',
        action: 'approve_refund',
        context: { amount: refundAmount },
        requiredCapability: 'refund_approver_level_2'
      });
      return { approve: approval.approved };
    }
    
    return { approve: decision.allowed };
  }
}

When to use: Teams love their existing framework, willing to insert HumanOS at critical decisions. Minimal integration, maximum flexibility.

Migration path: As more decision points are added, teams may choose to wrap the entire agent (Tier 2) or rebuild native (Tier 0).


Developer Journey Across Tiers

┌─────────────────────────────────────────────────────────────────┐
│                   DEVELOPER JOURNEY                             │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  START HERE ──▶ TIER 0: HUMAN-native                           │
│                 ├── Install from Marketplace                    │
│                 ├── Build in Workflow Builder                   │
│                 └── Get AI assistance from Builder Companion    │
│                     │                                           │
│                     └─── Full trust, best UX, fastest value    │
│                                                                 │
│  MIGRATE FROM ──▶ TIER 1-3: External Integration               │
│                   │                                             │
│  Tier 1: External as muscles (HumanOS orchestrates)            │
│  Tier 2: External wrapped (HumanOS governs)                    │
│  Tier 3: External with embedded hooks (HumanOS advises)        │
│                   │                                             │
│                   └─── Gradual migration path to Tier 0        │
│                                                                 │
│  GOAL: Migrate to Tier 0 for full benefits                     │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

Key insight: Tier 0 is not just "another tier"—it's the destination. Tiers 1-3 meet developers where they are, but the goal is always to migrate to Tier 0 for:

  • Fastest agent discovery (Marketplace)
  • Easiest workflow creation (Visual Builder)
  • Best AI assistance (Builder Companion)
  • Full trust and provenance
  • Zero external dependencies

Orchestration Across Tiers

HumanOS can coordinate work across all four tiers—Tier 0 native agents, wrapped external agents (Tier 2), external muscles (Tier 1), and embedded hooks (Tier 3):

// Cross-tier orchestration
const task = await humanos.createTask({
  type: 'process_insurance_claim',
  requiredCapabilities: ['claims_review', 'payment_processing']
});

// HumanOS routing decides:
// 1. Tier 0 agent from Marketplace reviews claim
let review = await ctx.call.agent('marketplace:claims-reviewer', {
  claimData: task.claimData
});

// 2. If confidence low, escalate to human via Workforce Cloud (Tier 0)
if (review.confidence < 0.8) {
  const humanReview = await ctx.oversight.escalate({
    reason: 'low_confidence_review',
    context: { claim: task.claimData, aiReview: review },
    requiredCapability: 'senior_claims_adjuster'
  });
  review = humanReview.decision;
}

// 3. Tier 1 muscle (n8n workflow) processes payment
const payment = await ctx.call.muscle('n8n-payment-processor', {
  amount: review.approvedAmount
});

// HumanOS logs full orchestration chain across all tiers with provenance

Benefits of the Four-Tier Model

For New Developers (Tier 0):

  • Install agents from Marketplace in seconds
  • Build workflows visually with Workflow Builder
  • Get AI help from Builder Companion
  • No infrastructure to manage
  • Full trust and provenance by default

For Existing Teams (Tier 1-3):

  • No forced migration ("we wrap your stack, don't rip it out")
  • Immediate governance value
  • Clear path to Tier 0 when ready
  • See limits of external platforms, migrate gradually

For HUMAN:

  • Shorter sales cycles (Tier 1-3 lowers barrier)
  • Broader TAM (works with any platform)
  • Long-term moat (customers see limits, migrate to Tier 0)
  • Marketplace network effects (Tier 0)

GTM Message by Tier:

  • Tier 0: "Build trusted AI agents in minutes, not months" (Primary message)
  • Tier 1-3: "You don't have to rebuild your agents to benefit from HUMAN—we can govern what you already have"

See also:

  • 43_haio_developer_architecture.md - Complete four-tier developer architecture
  • 105_agent_sdk_architecture.md - SDK patterns for all tiers
  • 135_agent_marketplace_architecture.md - Tier 0 Marketplace (NEW)
  • 136_workflow_builder_and_composer.md - Tier 0 Workflow Builder (NEW)
  • 137_companion_powered_builder.md - Tier 0 Builder Companion (NEW)

WHY HUMANOS WINS

Because it achieves what no one else is building:

  • A universal orchestration layer for human-AI collaboration
  • Real-time routing based on capability, not credentials
  • Safety enforcement at the protocol level
  • Provenance for every decision
  • Explainability for regulators
  • Accountability for enterprises
  • Protection for humans
  • Boundaries for AI

HumanOS becomes the operating system for the hybrid economy.

The same way:

  • Linux became the server OS
  • iOS became the mobile OS
  • Windows became the desktop OS

HumanOS becomes the Human-AI orchestration OS.


Metadata

Source Sections:

  • Lines 27,713-28,131: SECTION 57 — HumanOS Core Execution Model
  • Lines 32,933-33,299: SECTION 83 — HumanOS Architecture v0.1
  • Lines 39,958-40,381: SECTION 119 — HumanOS (coordination fabric)
  • Lines 41,161-41,955: SECTION 122 — HumanOS Deep Technical Blueprint

Merge Strategy: MAJOR CONSOLIDATION - Merged 4 HumanOS specifications, removed overlapping routing descriptions, used SECTION 122 as primary base

Strategic Purposes:

  • Building (primary)
  • Companion
  • Product Vision

Cross-References:

  • See: 04_the_five_systems.md - HumanOS overview
  • See: 05_the_human_protocol.md - HumanOS in the loop
  • See: 21_capability_graph_engine.md - How HumanOS uses capability data
  • See: 23_humanos_safety_and_escalation.md - Detailed safety logic
  • See: 35_capability_routing_pattern.md - The routing pattern HumanOS implements
  • See: 49_devops_and_infrastructure_model.md - Edge-first/device-first architecture (where routing happens)
  • See: 50_human_agent_design.md - How agents run in HumanOS
  • See: 105_agent_sdk_architecture.md - SDK implementation of agent-to-agent delegation chains, ctx.call.agent() primitive, ctx.oversight, vaults, provenance model

Line Count: ~1,100 lines (consolidated from ~2,005 lines across 4 sections)
Consolidation Savings: ~900 lines of redundant routing descriptions
Extracted: November 24, 2025
Version: 2.0 (Complete Reorganization with Major Consolidation)

Note: SECTION 84 (HumanOS Safety + Escalation Logic) is separate due to unique, comprehensive safety specification (3,700+ lines).