
Human-in-the-Loop

Overview

Ensure critical decisions require explicit human approval before execution. Human-in-the-Loop (HITL) is HUMΛN's core philosophy: AI assists and proposes, humans decide and approve.

Why Human-in-the-Loop?

  • Safety: Prevent autonomous AI from making irreversible mistakes
  • Accountability: Humans remain responsible for all decisions
  • Trust: Build confidence through transparency and oversight
  • Compliance: Meet regulatory requirements for human oversight
  • Learning: Humans learn from AI proposals, AI learns from human feedback
  • Gradual Autonomy: Start with heavy oversight, reduce as confidence grows

Think of it like autopilot on a plane: it can fly the plane, but the pilot is always in command and can take over instantly.

The HITL Philosophy

At HUMΛN, we believe:

  1. AI is a tool, not a decision-maker
  2. Humans must approve irreversible actions
  3. Transparency builds trust - show AI reasoning, not just results
  4. Confidence levels matter - AI must know when to escalate
  5. Gradual delegation - earn autonomy through demonstrated reliability
┌──────────────────────────────────────────────────────────────┐
│  HITL Spectrum: From Full Human Control → Guided Autonomy   │
├──────────────────────────────────────────────────────────────┤
│                                                              │
│  1. Human Does Everything                                    │
│     AI provides no assistance                                │
│     ▓▓▓▓▓▓▓▓▓▓ 100% Human                                   │
│                                                              │
│  2. AI Suggests, Human Decides (Recommended Start)           │
│     AI proposes options, human chooses                       │
│     ▓▓▓▓▓▓▓▓░░ 80% Human, 20% AI                            │
│                                                              │
│  3. AI Decides, Human Approves (HITL Pattern)                │
│     AI makes decisions, human must approve before execution  │
│     ▓▓▓▓░░░░░░ 40% Human, 60% AI                            │
│                                                              │
│  4. AI Decides, Human Spot-Checks                            │
│     AI executes, human reviews audit logs periodically       │
│     ▓▓░░░░░░░░ 20% Human, 80% AI                            │
│                                                              │
│  5. Fully Autonomous (High Confidence Only)                  │
│     AI executes without human approval (non-critical only)   │
│     ░░░░░░░░░░ 5% Human (oversight), 95% AI                 │
│                                                              │
└──────────────────────────────────────────────────────────────┘

Most HUMΛN workflows should operate at Level 3 (AI Decides, Human Approves).

How Human-in-the-Loop Works

HumanOS provides built-in HITL mechanisms at every stage:

1. Task Routing Approval

Human approves which resource gets assigned a task.

2. Action Approval

Human approves individual actions before execution (e.g., "Send this email", "Deploy this code").

3. Confidence-Based Escalation

AI automatically escalates to human when confidence drops below threshold.

4. Human Override

Human can take control at any time during execution.
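
The escalation check in mechanism 3 can be sketched independently of the SDK. This is a minimal illustration of the decision logic only, not a HumanOS API; `shouldEscalate`, `EscalationPolicy`, and the threshold values are assumptions for the sketch:

```typescript
// Illustrative escalation check; not part of the HumanOS SDK.
// A task pauses for human review when AI confidence falls below the
// configured threshold, or when the proposed action is irreversible.
interface EscalationPolicy {
  confidenceThreshold: number; // e.g. 0.7 → escalate below 70% confidence
  alwaysEscalateIrreversible: boolean;
}

function shouldEscalate(
  confidence: number,
  irreversible: boolean,
  policy: EscalationPolicy
): boolean {
  if (policy.alwaysEscalateIrreversible && irreversible) return true;
  return confidence < policy.confidenceThreshold;
}

const policy: EscalationPolicy = {
  confidenceThreshold: 0.7,
  alwaysEscalateIrreversible: true,
};

console.log(shouldEscalate(0.92, false, policy)); // false: confident, reversible
console.log(shouldEscalate(0.65, false, policy)); // true: below threshold
console.log(shouldEscalate(0.95, true, policy));  // true: irreversible action
```

The same check applies at both routing time (mechanism 1) and action time (mechanism 2); only the threshold typically differs.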

SDK Examples

TypeScript
```typescript
import { HumanOS } from '@human/sdk';

interface Invoice {
  id: string;
  amount: number;
  vendor: string;
}

async function processInvoiceWithApproval(invoice: Invoice) {
  try {
    // Create task with HITL enabled
    const task = await HumanOS.createTask({
      description: `Process invoice ${invoice.id} for $${invoice.amount}`,
      requiredCapabilities: ['accounting', 'invoice_processing'],
      humanInLoop: {
        enabled: true,
        approvalRequired: 'before_execution', // Require approval before processing
        approverRole: 'finance_manager',
        approvalTimeout: 3600, // 1 hour to approve
        escalationPolicy: 'auto', // Auto-escalate if no response
      },
    });
    
    console.log(`Task created: ${task.id}`);
    console.log(`Awaiting approval from finance manager...`);
    
    // Wait for human approval
    const approval = await HumanOS.waitForApproval(task.id, {
      onApprovalRequest: (request) => {
        console.log(`📋 Approval requested:`);
        console.log(`  Task: ${request.task.description}`);
        console.log(`  AI Recommendation: ${request.aiProposal.action}`);
        console.log(`  AI Reasoning: ${request.aiProposal.reasoning}`);
        console.log(`  AI Confidence: ${request.aiProposal.confidence}`);
        console.log(`  Alternatives: ${request.alternatives.length}`);
      },
    });
    
    if (approval.approved) {
      console.log(`✅ Approved by ${approval.approvedBy}`);
      console.log(`   Feedback: ${approval.feedback || 'None'}`);
      
      // Execute the task
      const result = await HumanOS.executeTask(task.id);
      return result;
    } else {
      console.log(`❌ Rejected by ${approval.rejectedBy}`);
      console.log(`   Reason: ${approval.rejectionReason}`);
      throw new Error(`Task rejected: ${approval.rejectionReason}`);
    }
    
  } catch (error) {
    console.error("HITL processing failed:", error);
    throw error;
  }
}

// Example: Process invoice with human oversight
const invoice = { id: 'INV-2026-001', amount: 5000, vendor: 'Acme Corp' };
processInvoiceWithApproval(invoice);
```

Python
```python
from human_sdk import HumanOS

async def process_email_with_approval(
    email_draft: dict,
    recipient: str
):
    try:
        # Create task with HITL enabled
        task = await HumanOS.create_task(
            description=f"Send email to {recipient}",
            required_capabilities=['email_composition', 'professional_communication'],
            human_in_loop={
                'enabled': True,
                'approval_required': 'before_execution',
                'approver_role': 'communications_lead',
                'approval_timeout': 1800,  # 30 minutes
                'show_ai_reasoning': True,  # Show why AI generated this email
            }
        )
        
        print(f"Task created: {task.id}")
        print(f"Awaiting approval from communications lead...")
        
        # Wait for human approval
        approval = await HumanOS.wait_for_approval(
            task.id,
            on_approval_request=lambda req: print(
                f"📧 Email draft ready for approval:\n"
                f"  To: {req.email_draft['to']}\n"
                f"  Subject: {req.email_draft['subject']}\n"
                f"  Preview: {req.email_draft['body'][:100]}...\n"
                f"  AI Confidence: {req.ai_proposal.confidence}\n"
            )
        )
        
        if approval.approved:
            print(f"✅ Email approved by {approval.approved_by}")
            
            # Apply any human edits
            if approval.edits:
                email_draft.update(approval.edits)
                print(f"   Human made {len(approval.edits)} edits")
            
            # Send the email
            result = await HumanOS.execute_task(task.id, payload=email_draft)
            return result
        else:
            print(f"❌ Email rejected by {approval.rejected_by}")
            print(f"   Reason: {approval.rejection_reason}")
            return None
            
    except Exception as e:
        print(f"HITL email processing failed: {e}")
        raise

# Example usage
# email = {
#     'to': 'client@example.com',
#     'subject': 'Q4 Performance Review',
#     'body': 'AI-generated email content...'
# }
# await process_email_with_approval(email, 'client@example.com')
```

Go
```go
package main

import (
    "fmt"

    "github.com/human/sdk-go/humanos"
)

func deployCodeWithApproval(deployConfig humanos.DeploymentConfig) error {
    // Create task with HITL enabled
    task, err := humanos.CreateTask(humanos.CreateTaskRequest{
        Description: fmt.Sprintf("Deploy %s to production", deployConfig.Service),
        RequiredCapabilities: []string{"devops", "kubernetes"},
        HumanInLoop: &humanos.HITLConfig{
            Enabled: true,
            ApprovalRequired: "before_execution",
            ApproverRole: "senior_engineer",
            ApprovalTimeout: 600, // 10 minutes
            RequireExplicitReason: true, // Force approver to provide reason
        },
    })
    if err != nil {
        return fmt.Errorf("failed to create task: %w", err)
    }
    
    fmt.Printf("Task created: %s\n", task.ID)
    fmt.Printf("Awaiting approval from senior engineer...\n")
    
    // Wait for human approval
    approval, err := humanos.WaitForApproval(task.ID, humanos.ApprovalOptions{
        OnApprovalRequest: func(req humanos.ApprovalRequest) {
            fmt.Printf("🚀 Deployment ready for approval:\n")
            fmt.Printf("  Service: %s\n", deployConfig.Service)
            fmt.Printf("  Environment: %s\n", deployConfig.Environment)
            fmt.Printf("  Version: %s\n", deployConfig.Version)
            fmt.Printf("  AI Safety Check: %s\n", req.AISafetyCheck)
            fmt.Printf("  AI Confidence: %.2f\n", req.AIProposal.Confidence)
        },
    })
    if err != nil {
        return fmt.Errorf("approval wait failed: %w", err)
    }
    
    if approval.Approved {
        fmt.Printf("✅ Deployment approved by %s\n", approval.ApprovedBy)
        fmt.Printf("   Approval reason: %s\n", approval.ApprovalReason)
        
        // Execute deployment
        result, err := humanos.ExecuteTask(task.ID)
        if err != nil {
            return fmt.Errorf("deployment failed: %w", err)
        }
        
        fmt.Printf("Deployment completed: %s\n", result.Status)
        return nil
    }
    
    fmt.Printf("❌ Deployment rejected by %s\n", approval.RejectedBy)
    fmt.Printf("   Rejection reason: %s\n", approval.RejectionReason)
    return fmt.Errorf("deployment rejected: %s", approval.RejectionReason)
}
```

REST API Example

```http
POST /v1/humanos/tasks
Content-Type: application/json
Authorization: Bearer <YOUR_API_KEY>

{
  "description": "Delete production database backup older than 90 days",
  "requiredCapabilities": ["database_administration", "backup_management"],
  "humanInLoop": {
    "enabled": true,
    "approvalRequired": "before_execution",
    "approverRole": "dba_lead",
    "approvalTimeout": 1800,
    "escalationPolicy": "auto",
    "showAIReasoning": true,
    "requireExplicitFeedback": true
  },
  "payload": {
    "backupId": "backup-2025-10-01",
    "size": "250GB",
    "lastAccessed": "2025-10-15"
  }
}
```

Response (201 Created):

```json
{
  "taskId": "task:human:a1b2c3d4e5f6...",
  "status": "awaiting_approval",
  "approvalRequest": {
    "id": "approval:human:x9y8z7w6v5u4...",
    "requestedAt": "2026-01-10T15:00:00Z",
    "expiresAt": "2026-01-10T15:30:00Z",
    "approverRole": "dba_lead",
    "aiProposal": {
      "action": "Delete backup-2025-10-01 (250GB, last accessed 2025-10-15)",
      "reasoning": "Backup is 101 days old, exceeds 90-day retention policy. Disk space usage at 87%.",
      "confidence": 0.92,
      "alternatives": [
        "Archive to cold storage instead of deletion",
        "Extend retention policy to 120 days"
      ]
    },
    "riskAssessment": {
      "level": "medium",
      "factors": ["irreversible_action", "production_data", "significant_size"],
      "mitigations": ["backup_verified_accessible", "recent_restore_test_passed"]
    }
  }
}
```

Approve the Task:

```http
POST /v1/humanos/approvals/approval:human:x9y8z7w6v5u4.../approve
Content-Type: application/json
Authorization: Bearer <DBA_LEAD_TOKEN>

{
  "approvedBy": "did:human:john-dba",
  "approvalReason": "Verified backup integrity and disk space constraints. Proceeding with deletion.",
  "feedback": "AI assessment was correct. Good analysis of risk factors."
}
```

Response (200 OK):

```json
{
  "approvalId": "approval:human:x9y8z7w6v5u4...",
  "status": "approved",
  "approvedBy": "did:human:john-dba",
  "approvedAt": "2026-01-10T15:05:00Z",
  "taskStatus": "executing",
  "provenance": {
    "approvalSignature": "0x7a3f9b2c...",
    "ledgerProof": "0x4e8d1a9f..."
  }
}
```
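
For clients calling the REST endpoint directly, the approve request above can be assembled in code. This sketch only mirrors the endpoint and body shape shown in the example; `buildApproveRequest`, its return shape, and the example identifiers are illustrative, not an official client:

```typescript
// Illustrative request builder for the approve endpoint shown above.
// Endpoint path and body fields come from the REST example; everything
// else (names, return shape) is a sketch, not part of any official SDK.
interface ApproveBody {
  approvedBy: string;
  approvalReason: string;
  feedback?: string;
}

function buildApproveRequest(
  baseUrl: string,
  approvalId: string,
  token: string,
  body: ApproveBody
): { url: string; method: string; headers: Record<string, string>; body: string } {
  return {
    url: `${baseUrl}/v1/humanos/approvals/${approvalId}/approve`,
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify(body),
  };
}

// Hypothetical approval ID and token for illustration only.
const req = buildApproveRequest(
  'https://api.example.com',
  'approval:human:example-id',
  'DBA_LEAD_TOKEN',
  {
    approvedBy: 'did:human:john-dba',
    approvalReason: 'Verified backup integrity and disk space constraints.',
  }
);
console.log(req.url);
```

The returned fields can be passed to any HTTP client (fetch, axios, and so on).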

Confidence-Based Auto-Escalation

HumanOS can automatically escalate to human review when AI confidence drops:

```typescript
const task = await HumanOS.createTask({
  description: "Analyze customer sentiment from support tickets",
  autoEscalation: {
    enabled: true,
    confidenceThreshold: 0.7, // Escalate if confidence < 70%
    escalateTo: 'customer_success_lead',
    escalationMessage: "AI confidence dropped below threshold - human review needed",
  },
});

// If AI confidence is 0.65 during execution:
// → Task automatically pauses
// → Human is notified
// → Human reviews and decides to continue or override
```

Approval Workflows

Sequential Approval

Require multiple approvals in sequence:

```typescript
const task = await HumanOS.createTask({
  description: "Wire transfer $50,000 to vendor",
  approvalChain: [
    { role: 'finance_analyst', reason: 'Verify invoice and payment details' },
    { role: 'finance_manager', reason: 'Approve budget allocation' },
    { role: 'cfo', reason: 'Final authorization for large transfers' },
  ],
});
```
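
The chain semantics can be sketched locally: a rejection at any step stops the chain, and the chain is approved only when every step has approved in order. `chainStatus` and the `ChainStep` shape below are illustrative, not SDK types:

```typescript
// Illustrative sequential-chain evaluation; not part of the HumanOS SDK.
type Decision = 'approved' | 'rejected' | 'pending';

interface ChainStep {
  role: string;
  decision: Decision;
}

// A rejection anywhere stops the chain immediately; otherwise the chain
// is pending at the first undecided step, and approved only when every
// step has approved.
function chainStatus(chain: ChainStep[]): Decision {
  for (const step of chain) {
    if (step.decision === 'rejected') return 'rejected';
    if (step.decision === 'pending') return 'pending';
  }
  return 'approved';
}

console.log(chainStatus([
  { role: 'finance_analyst', decision: 'approved' },
  { role: 'finance_manager', decision: 'pending' },
  { role: 'cfo', decision: 'pending' },
])); // 'pending': still waiting on the finance manager
```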

Parallel Approval

Require approval from any N-of-M approvers:

```typescript
const task = await HumanOS.createTask({
  description: "Merge PR to production branch",
  approvalPolicy: {
    type: 'threshold',
    required: 2, // Need 2 approvals
    eligible: ['senior_engineer', 'tech_lead', 'architect'],
    timeout: 7200, // 2 hours
  },
});
```
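
Under the hood, a threshold policy reduces to counting distinct approvals from eligible roles. A local sketch of that check; the names here are illustrative, not SDK APIs:

```typescript
// Illustrative N-of-M threshold check; not part of the HumanOS SDK.
interface Vote {
  approver: string;
  role: string;
  approved: boolean;
}

// Count approvals only from approvers holding an eligible role,
// at most one vote per approver.
function thresholdMet(
  votes: Vote[],
  required: number,
  eligibleRoles: string[]
): boolean {
  const seen = new Set<string>();
  let approvals = 0;
  for (const v of votes) {
    if (!v.approved) continue;
    if (!eligibleRoles.includes(v.role)) continue;
    if (seen.has(v.approver)) continue;
    seen.add(v.approver);
    approvals++;
  }
  return approvals >= required;
}

const votes: Vote[] = [
  { approver: 'alice', role: 'senior_engineer', approved: true },
  { approver: 'bob', role: 'intern', approved: true }, // not eligible
  { approver: 'carol', role: 'tech_lead', approved: true },
];
console.log(thresholdMet(votes, 2, ['senior_engineer', 'tech_lead', 'architect'])); // true
```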

Human Override

Humans can take control at any time during task execution:

```typescript
// Agent is executing a task
const task = await HumanOS.getTask(taskId);

if (task.status === 'executing' && task.confidence < 0.6) {
  // Human decides to take over
  await HumanOS.overrideTask(taskId, {
    overriddenBy: 'did:human:alice',
    reason: 'AI confidence too low, switching to manual execution',
    continueExecution: false, // Stop AI, human will finish manually
  });
  
  // Human completes the task manually
  const result = await completeTaskManually(task);
  await HumanOS.completeTask(taskId, result);
}
```

Feedback Loop

Human feedback trains AI to improve over time:

// After task completion, human provides feedback
await HumanOS.provideFeedback(taskId, {
  quality: 'excellent', // 'poor', 'fair', 'good', 'excellent'
  correctness: true,
  comments: 'AI correctly identified the issue and proposed a good solution',
  improvements: [
    'Could have provided more context about the business impact'
  ],
});

// HumanOS uses this feedback to:
// - Improve AI confidence calibration
// - Train routing decisions
// - Refine capability matching

Gradual Autonomy

Start with heavy human oversight, gradually reduce as AI proves reliable:

```typescript
// Stage 1: All decisions require approval (Week 1-2)
const task1 = await HumanOS.createTask({
  humanInLoop: { approvalRequired: 'all_decisions' }
});

// Stage 2: Only critical decisions require approval (Week 3-4)
const task2 = await HumanOS.createTask({
  humanInLoop: { approvalRequired: 'critical_only' }
});

// Stage 3: Only low-confidence decisions require approval (Month 2+)
const task3 = await HumanOS.createTask({
  autoEscalation: { confidenceThreshold: 0.75 }
});

// Stage 4: Fully autonomous with spot-checks (Month 3+)
const task4 = await HumanOS.createTask({
  humanInLoop: { enabled: false },
  auditFrequency: 'weekly' // Human reviews audit logs weekly
});
```

When to Require Human Approval

  • Financial Transactions: Require human approval for wire transfers, large purchases, and budget allocations
  • Code Deployments: Mandate senior engineer approval before deploying to production environments
  • Customer Communications: Review AI-generated emails, support responses, and public statements before sending
  • Policy Changes: Require executive approval for changes to access control, security policies, or governance rules

Best Practices

Do:

  • Require approval for irreversible actions (deletions, deployments, financial transactions)
  • Show AI reasoning and confidence levels to approvers
  • Log all approval decisions with cryptographic signatures
  • Implement timeout policies for approval requests

Don't:

  • Allow AI to proceed without approval for high-risk tasks
  • Hide AI confidence levels or alternative options from approvers
  • Skip human review for actions that affect production systems
  • Allow approvers without proper authority or context
