Delegate MCP Server - Comprehensive Documentation
Overview
The Delegate server provides agentic AI delegation - the ability to spawn autonomous AI agents that iteratively use tools until a task is complete. Unlike simple one-shot LLM calls, delegates can think, act, observe results, and adapt.
Key Innovation: Delegates run in the background, freeing the main conversation to continue while work happens asynchronously.
Server: nexus-delegate
Version: 3.1.0
Location: /opt/mcp-servers/delegate/mcp_delegate_server.py
Architecture
The Problem That Was Solved
The Gateway MCP architecture uses a subprocess-per-call model - each tool invocation spawns a fresh Python process that exits after returning. This killed any background work:
# THIS FAILED - task dies when subprocess exits
asyncio.create_task(self._run_delegation(delegation_id))
return {"delegation_id": delegation_id} # Process exits, task killed
The Solution: Redis Queue Architecture
┌───────────────────┐      ┌───────────────┐      ┌────────────────────┐
│  delegate.run()   │─────▶│  Redis Queue  │─────▶│  delegate_worker   │
│   (MCP Server)    │      │  (6770/6771)  │      │    (pm2 daemon)    │
└───────────────────┘      └───────────────┘      └────────────────────┘
          │                        │                         │
          │ Returns immediately    │ Persists task           │ Executes agentic loop
          ▼                        ▼                         ▼
   delegation_id            delegate:{id}            LLM calls + tools
Components:
1. mcp_delegate_server.py - MCP interface, queues tasks to Redis
2. delegate_worker.py - Standalone daemon, polls queue, executes tasks
3. Redis 6770/6771 - Task persistence and status tracking
Ports & Credentials
| Component | Port | Purpose |
|---|---|---|
| Delegate Vault | 6770 | Write operations, queue, status |
| Delegate Operational | 6771 | Read-only queries |
Locker: l_026b contains:
- l_fdff - vault_port (6770)
- l_44d6 - vault_password
- l_3bc0 - operational_port (6771)
Redis Key Structure
delegate:queue # LIST - pending delegation IDs (BLPOP)
delegate:{id} # HASH - delegation metadata & status
delegate:{id}:memory # LIST - working memory entries
delegate:{id}:tools # LIST - tool call history
delegate:user:{uid}:active # SET - user's active delegations
# Orchestration
delegate:mission:{id} # HASH - mission metadata
delegate:mission:{id}:workers # HASH - worker states
delegate:user:{uid}:missions # SET - user's missions
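As a quick illustration, the per-delegation key names above can be derived mechanically from a delegation ID and user ID (the helper name `delegation_keys` is hypothetical):

```python
def delegation_keys(delegation_id: str, user_id: str) -> dict:
    """Build the Redis key names for one delegation, per the scheme above."""
    return {
        "queue": "delegate:queue",
        "meta": f"delegate:{delegation_id}",
        "memory": f"delegate:{delegation_id}:memory",
        "tools": f"delegate:{delegation_id}:tools",
        "user_active": f"delegate:user:{user_id}:active",
    }

print(delegation_keys("abc123", "u_z1p5")["meta"])  # delegate:abc123
```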
Exposed Tools
1. delegate.run - Start Delegation
Start an autonomous AI agent that works in the background.
Parameters:

| Param | Type | Required | Description |
|-------|------|----------|-------------|
| user_id | string | ✓ | User ID |
| content | string | ✓ | Task content/context |
| task_type | string | | One of: summary_state, code_review, code_write, web_summarize, research, simple, agentic |
| prompt | string | | Custom prompt override |
| model | string | | Model (e.g., anthropic/claude-sonnet-4, openai/gpt-4o) |
| tools | array | | MCP tools to enable |
| max_iterations | int | | Max tool loops (default: 10) |
| notify | bool | | Voice notify on complete (default: true) |
| group_id | string | | AI Group to join |
| report_to | string | | Ops ID to report to |
Returns: { delegation_id: "abc123", status: "queued", ... }
2. delegate.status - Check Status
Parameters:

| Param | Type | Required | Description |
|-------|------|----------|-------------|
| delegation_id | string | ✓ | Delegation ID |
| include_memory | bool | | Include full conversation (default: false) |
Returns: Status, iteration count, tools used, result
3. delegate.list - List Delegations
Parameters:

| Param | Type | Required | Description |
|-------|------|----------|-------------|
| user_id | string | ✓ | User ID |
| status_filter | string | | One of: running, completed, failed, all |
4. delegate.orchestrate - Multi-Worker Mission
Start multiple parallel workers with different models.
Parameters:

| Param | Type | Required | Description |
|-------|------|----------|-------------|
| user_id | string | ✓ | User ID |
| mission | string | ✓ | Overall mission description |
| workers | array | ✓ | Worker definitions (name, task, model) |
| aggregate | bool | | Aggregate results (default: true) |
| aggregate_model | string | | Model for aggregation |
5. delegate.mission_status - Check Mission
6. delegate.list_missions - List Missions
Default Models by Task Type
DEFAULT_MODELS = {
"summary_state": "google/gemini-2.0-flash-001",
"code_review": "openai/gpt-4o",
"code_write": "openai/gpt-4o",
"web_summarize": "deepseek/deepseek-chat",
"research": "google/gemini-pro",
"simple": "openai/gpt-4o-mini",
"agentic": "anthropic/claude-sonnet-4",
"default": "google/gemini-2.0-flash-001"
}
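The selection logic is presumably a lookup with two fallbacks: an explicit `model` parameter wins, a known task type maps through the table, and anything else falls back to `"default"`. A minimal sketch (`pick_model` is a hypothetical name; the table is copied from above):

```python
DEFAULT_MODELS = {
    "summary_state": "google/gemini-2.0-flash-001",
    "code_review": "openai/gpt-4o",
    "code_write": "openai/gpt-4o",
    "web_summarize": "deepseek/deepseek-chat",
    "research": "google/gemini-pro",
    "simple": "openai/gpt-4o-mini",
    "agentic": "anthropic/claude-sonnet-4",
    "default": "google/gemini-2.0-flash-001",
}

def pick_model(task_type, override=None):
    # An explicit model override wins; unknown task types fall back to "default".
    return override or DEFAULT_MODELS.get(task_type, DEFAULT_MODELS["default"])

print(pick_model("research"))  # google/gemini-pro
```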
Tools Available to Delegates
Delegates can use these MCP tools during execution:
Context Tools:
- context_summary_state - Save summary state
- context_learn - Store knowledge
- context_recall - Recall knowledge
Voice Tools:
- voice_speak_quick - TTS notification
CRM Tools:
- contact_create, contact_search, contact_get, etc.
Project Tools:
- track_create, track_add_task, etc.
Knowledge Base:
- kb_create, kb_get, kb_search, etc.
Search:
- search_web - Web search
- search_documents - Internal document search
Control:
- task_complete - Signal completion (REQUIRED)
- spawn_delegate - Spawn sub-delegate
- check_delegate - Check sub-delegate status
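Since `task_complete` is the required termination signal, a delegate must emit it as a tool call like any other. The payload below is a hypothetical illustration of what such a call might carry (the field names and result text are assumptions, not the server's documented schema):

```python
import json

# Hypothetical shape of a task_complete tool call, following the common
# LLM tool-calling convention of JSON-encoded arguments.
tool_call = {
    "name": "task_complete",
    "arguments": json.dumps({
        "success": True,
        "result": "Research summary assembled in working memory",
    }),
}

args = json.loads(tool_call["arguments"])
print(args["success"])  # True
```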
AI Groups Integration
Delegates can join AI Groups and report to ops managers.
How It Works
1. On delegate.run() with group_id and report_to:
   - Worker joins group as a_del_{delegation_id[:4]}
   - Sends "STARTING" message to ops
2. During execution:
   - (Future: progress messages)
3. On completion:
   - Sends detailed "COMPLETED" report
   - Includes iterations, tools used, result
   - Leaves group automatically
4. On failure:
   - Sends "FAILED" report with error
   - Leaves group
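The lifecycle reports above can be sketched as a single formatting helper. Everything here except the `a_del_{delegation_id[:4]}` naming rule and the STARTING/COMPLETED/FAILED phases is a hypothetical illustration of what the worker might send:

```python
def lifecycle_message(phase, delegation_id, iterations=0, tools_used=0, error=None):
    """Build a group report for one lifecycle phase (message format is assumed)."""
    worker = f"a_del_{delegation_id[:4]}"  # worker name per the rule above
    if phase == "starting":
        return f"{worker}: STARTING"
    if phase == "completed":
        return f"{worker}: COMPLETED ({iterations} iterations, {tools_used} tool calls)"
    return f"{worker}: FAILED - {error}"

print(lifecycle_message("starting", "abc123"))  # a_del_abc1: STARTING
```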
Example Usage
gateway.run([{
  server: 'delegate',
  tool: 'run',
  args: {
    user_id: 'u_z1p5',
    content: 'Research quantum computing advances in 2025',
    task_type: 'research',
    group_id: 'g_7emp',   // Join this AI Group
    report_to: 'o_ulgh'   // Report to this ops
  }
}])
Delegate Worker Daemon
Process Management
# Start/restart
pm2 restart delegate-worker
# View logs
pm2 logs delegate-worker
# Status
pm2 status delegate-worker
Worker Loop
while self.running:
    # BLPOP blocks until a task is available (or the 30s timeout elapses)
    result = self.vault.blpop('delegate:queue', timeout=30)
    if result:
        _, delegation_id = result
        await self.run_delegation(delegation_id)
Agentic Execution Loop
while iteration < max_iterations and not task_completed:
    # 1. Call LLM with working memory
    response = await self.call_openrouter(model, working_memory, tools)
    # 2. Extract response and tool calls
    tool_calls = response.choices[0].message.tool_calls
    # 3. Execute each tool and feed the result back into working memory
    for tc in tool_calls:
        result = await self.execute_tool(tc.name, tc.args)
        working_memory.append(result)
        # 4. Check for completion
        if tc.name == "task_complete":
            task_completed = True
            break
    iteration += 1
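When the loop exits, the worker has to decide the delegation's terminal status. A minimal sketch of that decision, assuming (this is an assumption, not documented behavior) that exhausting max_iterations without a task_complete call counts as a failure:

```python
def terminal_status(task_completed: bool, iteration: int, max_iterations: int) -> str:
    """Map loop-exit conditions to a delegation status (failure rule assumed)."""
    if task_completed:
        return "completed"
    if iteration >= max_iterations:
        # Hit the iteration cap without signaling task_complete.
        return "failed"
    return "running"

print(terminal_status(True, 3, 10))  # completed
```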
Configuration Files
/data/nexus3/delegate/vault/redis.conf
port 6770
requirepass [from locker]
dir /data/nexus3/delegate/vault/data
appendonly yes
/data/nexus3/delegate/operational/redis.conf
port 6771
slaveof 127.0.0.1 6770
masterauth [from locker]
dir /data/nexus3/delegate/operational/data
Historical Note
This architecture was developed on January 7, 2026 during the first-ever AI agent team coordination session (AI Group g_7emp). Agent "Vader" (a_kig0) identified the critical bug where Gateway's subprocess model killed background tasks, and implemented the Redis queue solution.
This was part of the Nexus Environment Audit - KB Documentation project, one of the first missions executed by a coordinated AI team under human oversight.
Version History
- v3.1 (2026-01-07): Redis queue architecture, AI Groups integration
- v3.0 (2026-01-06): Initial MCP server with asyncio (broken)
- v2.x: Earlier iterations
Documentation by Agent Vader (a_kig0) | KB node updated by Rocky (o_jugt) | AI Group g_7emp | January 7, 2026