Every Workflow You Build is Instantly an AI Capability
We call it the DAEQ model: every workflow on HUMΛN is automatically Discoverable, Accessible, Executable, and Queryable by any AI agent. This post explains what that means in practice and why it matters.
The problem with "integrating" AI into workflows
The standard approach goes like this:
- Build a workflow
- Separately write an API wrapper for it
- Write an OpenAPI spec
- Maintain an MCP adapter pointing at the spec
- Keep everything in sync as the workflow evolves
This is four artifacts for one thing. Every time your workflow changes, you update four places. When they drift, the AI client either breaks or silently does the wrong thing.
HUMΛN collapses this to one artifact: the workflow itself.
How it works
When you build a workflow in HUMΛN, we know everything about it: its ID, name, description, trigger type, and (optionally) its input schema. That's all an AI tool definition needs.
Your org's MCP endpoint (acme.mcp.haio.run) dynamically generates tools/list from your actual capability graph — every workflow, agent, and connected external tool, always in sync, never a separate doc to maintain.
Paste acme.mcp.haio.run into Claude Desktop. You're done.
Magic by Default — the important nuance
Workflows without a declared schema still work. We return inputSchema: {} for them in tools/list, which means AI clients accept any input. This is intentional: you don't have to declare a schema to get the benefit. The workflow runs, governance fires, provenance is logged.
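The fallback is easy to picture as a one-line default when the tool definition is derived from workflow metadata. A minimal sketch, assuming illustrative `WorkflowMeta` and `toToolDefinition` names (these are not HUMΛN's actual types):

```typescript
// Illustrative shapes only -- not HUMΛN's real internal types.
interface WorkflowMeta {
  id: string;
  name: string;
  description: string;
  inputSchema?: object; // workflows may omit a schema entirely
}

interface ToolDefinition {
  name: string;
  description: string;
  inputSchema: object;
}

// Derive an MCP tool definition from workflow metadata.
// Schemaless workflows fall back to {}, so AI clients accept any input.
function toToolDefinition(wf: WorkflowMeta): ToolDefinition {
  return {
    name: wf.name,
    description: wf.description,
    inputSchema: wf.inputSchema ?? {},
  };
}
```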
When you want Claude to understand the shape of your inputs — to give the user good prompts, to reject malformed calls before they reach your workflow — you declare a schema:
```typescript
workflow
  .schema({
    type: 'object',
    properties: {
      invoiceId: { type: 'string' },
      amount: { type: 'number', description: 'USD' },
    },
    required: ['invoiceId', 'amount'],
  })
```
That's it. tools/list now returns proper type hints. Claude knows what to ask for.
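With the schema declared, the corresponding tools/list entry would look roughly like this (a sketch; the description text is an assumed placeholder, and the exact envelope fields may differ):

```typescript
// Roughly what tools/list would return for the workflow above.
// The description string is assumed, not taken from a real workflow.
const invoiceApprovalTool = {
  name: 'invoice-approval',
  description: 'Approve an invoice for payment',
  inputSchema: {
    type: 'object',
    properties: {
      invoiceId: { type: 'string' },
      amount: { type: 'number', description: 'USD' },
    },
    required: ['invoiceId', 'amount'],
  },
};
```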
This is what we mean by Magic by Default: zero-config works correctly; declaration unlocks progressively more control.
The governance part isn't optional
Here's what Make.com does: it proxies tool calls through a token auth layer. The token says "you're allowed to call this workflow." That's it. No per-call delegation scope. No audit record per invocation. No boundary contracts.
HUMΛN routes every tools/call — regardless of who sent it — through POST /v1/agents/call. That means:
- Delegation is scoped: the AI client's token has a specific set of scopes; calls outside that scope fail at the gateway, not in the workflow.
- Boundary contracts fire: workflows marked `never-write-without-approval` stop at the gateway; the AI can't bypass them.
- Every call is ledgered: LedgerNode logs every invocation cryptographically. Auditors can see exactly what the AI called, with what input, under what delegation, and what the outcome was.
This is not a dashboard feature. It's structural. You cannot turn it off. The AI cannot route around it.
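The scoping rule in the first bullet amounts to a check that runs before any workflow code does. A hedged sketch (the token shape, `ScopeError`, and `authorizeCall` are hypothetical names; HUMΛN's actual gateway isn't public):

```typescript
// Hypothetical delegation token -- field names are illustrative.
interface DelegationToken {
  subject: string;   // which AI client the token was issued to
  scopes: string[];  // workflow IDs this token may invoke
}

class ScopeError extends Error {}

// Fail at the gateway, not in the workflow: an out-of-scope call
// never reaches workflow code at all.
function authorizeCall(token: DelegationToken, workflowId: string): void {
  if (token.scopes.indexOf(workflowId) === -1) {
    throw new ScopeError(
      `token for ${token.subject} has no scope for ${workflowId}`,
    );
  }
}
```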
Workflow-to-workflow handoff
Workflows can now call other workflows as first-class steps:
```typescript
const dealClosure = createWorkflow('deal-closure')
  .step('qualify', { type: 'agent', config: { agentId: 'agent_qualifier' } })
  .workflowHandoff('invoice-approval-step', {
    targetWorkflowId: 'invoice-approval',
    input: { amount: '{{context.deal.value}}', invoiceId: '{{context.invoice.id}}' },
    timeoutMs: 48 * 60 * 60 * 1000, // wait up to 48 hours
  })
  .step('generate-contract', { type: 'agent', config: { agentId: 'agent_contracts' } })
  .build();
```
The parent workflow durably pauses (state saved in Inngest), the Invoice Approval workflow runs — potentially for hours, with its own human approval gate — and the parent resumes with the approval result. The parent's delegation scope narrows the child's: the child can't do more than the parent was authorized to do.
The full provenance chain flows through: the child's ledger entry references parent_run_id and parent_delegation_did. Auditors get the complete tree.
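Scope narrowing is effectively set intersection, and the ledger linkage is two extra fields on the child's entry. A sketch under assumed names (only parent_run_id and parent_delegation_did come from the post; everything else here is illustrative):

```typescript
// Child delegation can never exceed the parent's: keep only the
// requested scopes that the parent already holds.
function narrowScopes(parent: string[], requested: string[]): string[] {
  return requested.filter((scope) => parent.indexOf(scope) !== -1);
}

// Illustrative ledger entry. parent_run_id / parent_delegation_did
// are named in the post; the surrounding fields are assumptions.
interface LedgerEntry {
  run_id: string;
  parent_run_id?: string;
  parent_delegation_did?: string;
  tool: string;
  input: object;
  outcome: 'success' | 'failure';
}
```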
What this means for your architecture
Before:

```text
Workflow → API endpoint → OpenAPI spec → MCP adapter → AI client
(4 artifacts, manual sync, drift risk)
```

After:

```text
Workflow → AI client
(1 artifact, always in sync, governance automatic)
```
The AI can call your Invoice Approval workflow the moment you click Save. The tools/list response reflects the change immediately. You wrote one thing. It's in Claude.
Connecting external tools
If you have external MCP servers (GitHub tools, database queries, Slack messages), connect them once:
```shell
human mcp connect https://github.example.com/mcp
```
HUMΛN calls tools/list on the remote, registers each tool as a capability node, mints a cryptographic identity for the server, and adds everything to your capability mesh. The tools immediately appear alongside your workflows in tools/list.
You don't need separate credentials per tool, separate scopes per server, or separate audit trails. One human mcp connect, everything is governed.
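The connect flow described above reduces to: fetch the remote tool list, then register each tool as a capability node. A simplified synchronous sketch (`connectMcpServer` and both callback parameters are illustrative stand-ins, not HUMΛN's API):

```typescript
// Minimal shape of a tool advertised by a remote MCP server.
interface RemoteTool {
  name: string;
  description: string;
  inputSchema: object;
}

// Simplified, synchronous sketch of the connect flow:
// list the remote's tools, register each one as a capability node.
function connectMcpServer(
  listTools: () => RemoteTool[],               // wraps the remote tools/list call
  registerCapability: (t: RemoteTool) => void, // adds a capability node to the mesh
): number {
  const tools = listTools();
  tools.forEach(registerCapability);
  return tools.length; // how many tools were registered
}
```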
Try it
- Build any manual-triggered workflow in the Workflow Builder
- Get a delegation token in Settings → MCP Access
- Add your org endpoint to Claude Desktop or Cursor
- Ask Claude "what can you do?" — it'll list your workflows
The schema and governance are already there. You just built the workflow.
Further reading:
- MCP Capability Mesh — developer guide — schema declaration, org endpoint, workflow handoff, Capability Pack authoring
- Connecting an external MCP server — one-URL flow, trust tiers, capability node verification
- Why governed workflows matter