How Companion routes to humans when AI judgment is not enough
Primary concern
You will trust Companion with real work only if failure modes are visible and recoverable.
The escalation trigger
When confidence falls below policy thresholds — or when a step is irreversible or externally binding — Companion does not “try harder.” It routes to a human with scoped context: what was attempted, what evidence exists, and what policy demands.
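The trigger described above can be sketched as a small policy check. This is a minimal sketch, not Companion's actual API; the names (`Step`, `EscalationPolicy`, `should_escalate`) and the threshold value are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Step:
    action: str
    confidence: float          # model confidence in [0, 1]
    irreversible: bool         # e.g. deletes data, moves money
    externally_binding: bool   # e.g. signs a contract, emails a customer

@dataclass
class EscalationPolicy:
    min_confidence: float = 0.85  # hypothetical policy threshold

    def should_escalate(self, step: Step) -> bool:
        # Irreversible or externally binding steps always route to a human,
        # regardless of confidence; otherwise escalate only when confidence
        # falls below the policy threshold.
        if step.irreversible or step.externally_binding:
            return True
        return step.confidence < self.min_confidence
```

Note the asymmetry: high confidence never overrides irreversibility. "Trying harder" is simply not a branch in this function.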
What the human sees
Not a raw model transcript, but a work item: structured inputs, the proposed action, alternatives, and explicit uncertainty. The goal is decision quality, not chat continuity.
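The shape of such a work item might look like the sketch below. Field names (`WorkItem`, `summary`) are hypothetical, not Companion's schema; the point is that the reviewer gets structured fields, not a scrollback.

```python
from dataclasses import dataclass

@dataclass
class WorkItem:
    inputs: dict            # structured inputs the agent worked from
    proposed_action: str    # what Companion intends to do
    alternatives: list      # other actions it considered
    uncertainty: str        # an explicit statement of what is unknown

    def summary(self) -> str:
        # A reviewer-facing one-liner: the proposal plus its stated
        # uncertainty, rather than a chat history to reconstruct.
        return f"Proposed: {self.proposed_action} (uncertain about: {self.uncertainty})"
```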
What the audit trail looks like
A defensible narrative: actor identity, delegation scope, policy version, human decision, and outcome. This is how Companion stays aligned with Workforce and governance surfaces — not a siloed chat bubble.
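A record carrying those fields might be sketched as follows. The class name, field names, and JSON serialization are illustrative assumptions, not Companion's storage format.

```python
import json
from dataclasses import dataclass, asdict

# frozen=True makes instances immutable: an audit entry should not be
# editable after the fact.
@dataclass(frozen=True)
class AuditRecord:
    actor: str             # identity of the agent or human who acted
    delegation_scope: str  # what the actor was authorized to do
    policy_version: str    # which policy was in force at the time
    human_decision: str    # e.g. approved / rejected / modified
    outcome: str           # what actually happened

    def to_json(self) -> str:
        # Stable key order so records diff cleanly across systems.
        return json.dumps(asdict(self), sort_keys=True)
```

Because every field is named and versioned, the same record can be read by Workforce and governance tooling without parsing a chat log.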
Why this beats “someone reviews it”
Review without provenance is theater. Review with receipts is operations.