Building a Reference-Grade Market Intelligence Workflow on HUMΛN
Why we built Signals as the reference workflow
Every platform needs a reference implementation — something that shows builders not just what's possible but how to do it well. For HUMΛN workflows, that reference is Signals, a market intelligence workflow that continuously monitors competitors, surfaces intelligence, and generates role-specific artifacts.
Signals was chosen because it touches every hard thing:
- Multi-source, multi-agent coordination (not just one LLM call)
- Human-in-the-loop gates at the right moments (not everywhere)
- Learning from feedback from day one (not bolted on later)
- Multiple delivery surfaces with zero-config defaults and opt-in power
- Policy and trust/safety enforcement without developer overhead
If you can read Signals and understand every decision, you can build anything on HUMΛN.
The pipeline in 8 components
Signals is a linear pipeline with a fan-out in the middle:
Source Scout ← monitors approved sources, emits raw candidates
↓
Signal Normalizer ← stable schema, dedup within 72-hour window
↓
Trust & Safety Gate ← PII, provenance, source allowlist — runs after normalization*
↓
Signal Judge ← scores relevance/novelty/urgency/confidence; escalates if needed
↓
Opportunity Router ← scored matrix, fan-out to personas
↓
Artifact Workers ← 6 specialist workers, parallel generation
↓
Delivery Orchestrator ← surface matrix; Companion + CP always on
↓
Learning Engine ← aggregates all feedback, proposes tuning
* Why Trust & Safety runs after normalization: the safety checks operate on NormalizedSignal fields (structured content type, evidence URLs, canonical entity tags). Raw candidates from the scout are too unstructured to reliably detect PII or verify provenance. Normalizing first gives T&S a clean, typed input — a more reliable signal to check.
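The Normalizer's 72-hour dedup window can be read as a pure function. Here is a minimal sketch of that idea — the field names (canonical_key, observed_at) are illustrative assumptions, not the actual NormalizedSignal schema:

```typescript
// Hypothetical sketch of the Normalizer's 72-hour dedup window.
// Field names here are assumptions, not the real NormalizedSignal schema.
interface NormalizedSignalLite {
  canonical_key: string; // e.g. entity + content type + evidence hash
  observed_at: number;   // epoch milliseconds
}

const DEDUP_WINDOW_MS = 72 * 60 * 60 * 1000;

// Keep a candidate only if no signal with the same canonical key
// was already kept inside the trailing 72-hour window.
function dedupWithinWindow(signals: NormalizedSignalLite[]): NormalizedSignalLite[] {
  const lastSeen = new Map<string, number>();
  const kept: NormalizedSignalLite[] = [];
  for (const s of [...signals].sort((a, b) => a.observed_at - b.observed_at)) {
    const prev = lastSeen.get(s.canonical_key);
    if (prev === undefined || s.observed_at - prev > DEDUP_WINDOW_MS) {
      kept.push(s);
      lastSeen.set(s.canonical_key, s.observed_at);
    }
  }
  return kept;
}
```

The point is that dedup is stateful per canonical key but local to the window, so the same story resurfacing a week later is treated as a fresh signal.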
Each component is a standalone HUMΛN agent registered with the runtime. They can be called independently, tested with a mock context, or replaced with your own implementation. The orchestrator (workflow.ts) ties them together:
// workflow.ts — each step is a ctx.call.agent() invocation
// The runtime routes by capability string. Each agent declares which
// capabilities it satisfies. This is HUMΛN capability-first routing.
const scoutResult = await ctx.call.agent<SourceScoutOutput>(
'signals.scout.monitor', // ← capability string, not agent ID
{ org_did, workflow_run_id, watched_entities, source_families }
);
const normResult = await ctx.call.agent<SignalNormalizerOutput>(
'signals.normalize',
{ raw_candidates: scoutResult.data.raw_candidates, org_did, workflow_run_id }
);
const trustResult = await ctx.call.agent<TrustSafetyOutput>(
'signals.trust_safety.gate', // ← matches trust-safety.ts CAPABILITIES
{ signals: normResult.data.normalized_signals, org_did, workflow_run_id }
);
No hardcoded imports. No direct function calls between agents. The platform resolves the right agent at runtime — which means you can swap an implementation without touching the orchestrator.
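Because routing goes through capability strings, any agent can be exercised in isolation against a stubbed registry. A minimal sketch of such a mock, assuming nothing about the real runtime beyond the contract the orchestrator depends on:

```typescript
// Minimal mock of capability-first routing for unit tests.
// The real HUMΛN runtime resolves agents differently; this only
// mirrors the shape of ctx.call.agent() used by the orchestrator.
type AgentHandler = (input: unknown) => Promise<{ data: unknown }>;

class MockRuntime {
  private registry = new Map<string, AgentHandler>();

  register(capability: string, handler: AgentHandler): void {
    this.registry.set(capability, handler);
  }

  // Resolve by capability string, never by import.
  async callAgent<T>(capability: string, input: unknown): Promise<{ data: T }> {
    const handler = this.registry.get(capability);
    if (!handler) throw new Error(`no agent satisfies capability: ${capability}`);
    return (await handler(input)) as { data: T };
  }
}
```

Swapping an implementation is just re-registering a different handler under the same capability string, which is exactly why the orchestrator never changes.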
The design decisions that matter
1. Capability-first routing, not hardcoded agent imports
The single most important pattern in Signals: the orchestrator never imports agent modules directly. Every inter-agent call goes through ctx.call.agent() with a capability string.
// ❌ Don't do this — couples orchestrator to implementation
import { signalJudgeHandler } from './signal-judge.js';
const verdicts = await signalJudgeHandler.execute(ctx, { signals: ... });
// ✅ Do this — capability-first routing
const judgeResult = await ctx.call.agent<SignalJudgeOutput>(
'signals.judge.score',
{ signals: trustOutput.safe_signals, org_did, workflow_run_id }
);
This is how worker replacement works. Replace the agent that satisfies 'signals.judge.score' and the orchestrator never changes.
2. Human review is a fire-and-forget gate, not a synchronous blocker
The two artifact types that require review — PRD drafts and integration assessments — call ctx.approval.request(), which routes the artifact to Workforce Cloud as a work item. The orchestrator continues delivering other artifacts in parallel.
// In artifact-workers.ts — PRD always requires Workforce Cloud review
if (workforceInstallationId) {
await ctx.approval.request({
installation_id: workforceInstallationId,
renderer_id: 'prd-review', // matches workforce_module.work_item_renderers[id]
artifact_id: (artifact as { artifact_id: string }).artifact_id,
urgency: signalUrgencyToApprovalUrgency(signal.urgency),
metadata: {
entity: signal.entity_name,
signal_confidence: signal.confidence,
workflow_run_id: input.workflow_run_id,
},
});
}
// Delivery continues for other artifacts — not blocked by this review
This is the right mental model: human review is asynchronous. The artifact sits in Workforce Cloud until a PM approves it; by then the rest of the workflow has already delivered briefs to the other personas.
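The snippet above calls signalUrgencyToApprovalUrgency to translate signal urgency into approval-queue urgency. One plausible shape for that helper, as a sketch only — the enum values on both sides are assumptions:

```typescript
// Hypothetical mapping from signal urgency to approval urgency.
// Both sets of values are illustrative, not the actual Signals enums.
type SignalUrgency = 'low' | 'medium' | 'high' | 'critical';
type ApprovalUrgency = 'routine' | 'elevated' | 'immediate';

function signalUrgencyToApprovalUrgency(urgency: SignalUrgency): ApprovalUrgency {
  switch (urgency) {
    case 'critical': return 'immediate'; // jump the review queue
    case 'high':     return 'elevated';
    default:         return 'routine';   // low and medium can wait
  }
}
```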
3. Learning is always on — and zero config
The Learning Engine doesn't ask permission to emit feedback signals. Every component emits a typed event at the end of every run:
// The canonical feedback pattern — used identically in all 8 components
await ctx.events.emit('humanos.signals.feedback', {
feedback_type: 'verdict_signal', // ← typed, not free-form string
source: 'signal-judge',
workflow_run_id: input.workflow_run_id,
org_did: input.org_did,
agent_id: AGENT_ID,
signal_strength: passedCount / totalCount,
metadata: {
total_signals: signals.length,
passed: passedCount,
suppressed: signals.length - passedCount,
avg_confidence: /* computed */,
compliance_escalated: complianceEscalatedCount,
},
});
The org admin can turn off proposals (learning.enabled = false), but the event emission is baked into every component. The system is learning from run one — you'll see proposals in the CP before you've configured anything.
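The signal_strength field in that event is a simple pass ratio. A sketch of how a component might assemble the payload, where the payload keys follow the event above but the verdict shape is an assumption:

```typescript
// Assemble the feedback payload emitted at end of run.
// Payload keys mirror the event above; the Verdict type is illustrative.
interface Verdict { passed: boolean; confidence: number; compliance: boolean }

function buildFeedbackPayload(verdicts: Verdict[]) {
  const passed = verdicts.filter((v) => v.passed).length;
  const avgConfidence =
    verdicts.reduce((sum, v) => sum + v.confidence, 0) / Math.max(verdicts.length, 1);
  return {
    feedback_type: 'verdict_signal' as const,
    signal_strength: verdicts.length ? passed / verdicts.length : 0,
    metadata: {
      total_signals: verdicts.length,
      passed,
      suppressed: verdicts.length - passed,
      avg_confidence: avgConfidence,
      compliance_escalated: verdicts.filter((v) => v.compliance).length,
    },
  };
}
```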
4. Delivery surfaces follow "magic by default, control when needed"
Two surfaces are always on with zero config:
- Companion queue — every artifact lands here on install
- CP signal feed — visible in the Command Plane signals dashboard
Additional surfaces (Slack, email, Google Docs, filesystem) are configured once via a connector (~30 seconds) and then referenced per persona in the routing config.
The design principle: see real value in Companion within minutes of install, then progressively add surfaces. The full delivery surface matrix is in delivery-orchestrator.ts and reference-config-pack.ts.
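"Magic by default, control when needed" can be modeled as a merge: the two always-on surfaces are unconditional, and opt-in surfaces come from per-persona config. A sketch under that assumption; the surface identifiers beyond Companion and the CP feed are illustrative:

```typescript
// Companion and the CP feed are always on; everything else is opt-in.
// Surface identifiers are illustrative, not the real delivery matrix keys.
type Surface = 'companion' | 'cp_feed' | 'slack' | 'email' | 'gdocs' | 'filesystem';

const ALWAYS_ON: Surface[] = ['companion', 'cp_feed'];

function resolveSurfaces(optIn: Surface[] = []): Surface[] {
  // Defaults first, then persona opt-ins, deduplicated.
  return [...new Set<Surface>([...ALWAYS_ON, ...optIn])];
}
```

Note the zero-argument call already yields a useful result, which is the "value in Companion within minutes of install" property.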
5. Escalation is built in, not bolted on
Signal Judge applies three distinct escalation patterns, each using the full EscalationRequest interface:
// Low-confidence signals → expert review (Fourth Law: AI must know when it doesn't know)
if (signal.confidence < minConfidence) {
await ctx.escalate(buildSignalConfidenceReviewEscalation(signal, scoring, input.org_did));
}
// Compliance signals → always routed to compliance team
if (scoring.is_compliance) {
await ctx.escalate(buildComplianceEscalation(signal, scoring, input.org_did));
}
// Urgent strategic signals → immediate executive notification
if (scoring.relevance_score >= 0.9 && signal.urgency !== 'low') {
await ctx.escalate(buildUrgentStrategicEscalation(signal, scoring, input.org_did));
}
Each escalation is fire-and-forget — the pipeline continues. The escalation schemas (defined in signals-escalation-schemas.ts) carry the full context: reason, question to the reviewer, allowed actions, required capability, routing mode, and a structured UI definition for the Workforce Cloud work item. See the source for examples of what a well-formed escalation request looks like.
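The three branches above can also be read as one pure decision function, which is how you would unit-test the gate without a runtime. The thresholds mirror the snippet; the types are sketches:

```typescript
// The three escalation branches as a pure, testable predicate.
// Thresholds come from the snippet above; the input type is illustrative.
interface ScoredSignal {
  confidence: number;
  urgency: 'low' | 'medium' | 'high';
  relevance_score: number;
  is_compliance: boolean;
}

type Escalation = 'confidence_review' | 'compliance' | 'urgent_strategic';

function escalationsFor(s: ScoredSignal, minConfidence: number): Escalation[] {
  const out: Escalation[] = [];
  if (s.confidence < minConfidence) out.push('confidence_review');
  if (s.is_compliance) out.push('compliance');
  if (s.relevance_score >= 0.9 && s.urgency !== 'low') out.push('urgent_strategic');
  return out; // branches are independent: one signal can trigger all three
}
```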
How to fork Signals for your own domain
The Signals pipeline is domain-agnostic up to Signal Normalizer. After normalization, everything is typed by signal_type. To build your own domain-specific workflow:
- Copy the source scout — replace the ctx.call.agent('connector.your-source.method', ...) calls for your sources
- Keep the normalizer and judge — they work on any NormalizedSignal
- Customize the router — change the persona matrix weights in reference-config-pack.ts
- Replace the workers — write your own artifact generators that call ctx.llm.complete() and ctx.artifacts.create()
- Keep delivery and learning as-is — they're domain-agnostic
The WorkflowManifestV1 from @human/platform-extensions is your registration contract. Declare your workflow's triggers, steps, workers, CP extension, Companion module, and Workforce module in one manifest.
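As a loose illustration of what declaring those pieces in one place might look like: the top-level sections follow the list above (triggers, steps, workers, CP extension, Companion module, Workforce module), but every field name and value in this object is a guess, not the actual WorkflowManifestV1 schema:

```typescript
// Illustrative manifest shape only. The real WorkflowManifestV1 contract
// lives in @human/platform-extensions; field names here are assumptions.
const manifest = {
  kind: 'humanos.workflow.v1',
  name: 'my-domain-signals',
  triggers: [{ type: 'schedule', cron: '0 * * * *' }],
  steps: ['scout', 'normalize', 'trust_safety', 'judge', 'route', 'generate', 'deliver'],
  workers: ['my-artifact-worker'],
  cp_extension: { dashboard: 'signal-feed' },
  companion_module: { queue: true },
  workforce_module: { work_item_renderers: [{ id: 'prd-review' }] },
};
```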
What "reference grade" means in practice
A reference implementation isn't just working code — it's code that teaches the right patterns. Here's the checklist we used for Signals:
- Every LLM call uses ctx.llm.complete({ prompt: messages, temperature, maxTokens }) — never direct OpenAI/Anthropic imports
- Every agent emits a mandatory ctx.events.emit('humanos.signals.feedback', {...}) at end of run — not optional
- Artifacts are created with ctx.artifacts.create({ kind, body, production, metadata }) with full AgentChainEntry provenance
- HITL gates use ctx.approval.request({ installation_id, renderer_id, artifact_id, urgency, metadata }) — PRD drafts are never auto-approved
- Connectors are called via ctx.call.agent('connector.<id>.<method>', input) — never direct HTTP or imported connector modules
- Config comes from input (resolved envelope, not from a separate config service) — all defaults live in reference-config-pack.ts
- Learning proposals flow via ctx.events.emit('humanos.learning.proposal', {...}) — never direct DB writes
- Escalations use ctx.escalate(buildXxxEscalation(...)) with structured schemas from signals-escalation-schemas.ts
If every component in your workflow hits all 8, you're building reference-grade.
What's next
In the next article, we'll show you how the manifest system wires a workflow into the Control Plane, Companion, and Workforce Cloud — and how all three surfaces are declared in a single humanos.workflow.v1 manifest.
Signals is open source and installable from the HUMΛN Marketplace. Fork it, break it, rebuild it.
Questions? Start a thread in the HUMΛN Builders community → community.builtwithhuman.com
Signals reference workflow — Part 1 of 3