
Nexus Recall - Integration Strategy

Overview

This document outlines how Nexus Recall, the AI memory system, will integrate with existing Nexus infrastructure: the Session MCP server and the Context MCP server.

Goal: Build a memory system that leverages Beads' architectural patterns while integrating seamlessly with Nexus's existing session management and context storage.


Current Nexus Architecture

Session MCP Server (Port 6645/6646)

Purpose: Session lifecycle management with AI state persistence

Key Features:
  • Session creation and continuation
  • AI state persistence (ai_state_save, ai_state_get, ai_state_transfer)
  • Agenda management for session handoffs
  • Work item tracking
  • Session integrity checking

Storage: Redis with key pattern sess:{stable_id}

Relevant for Recall:

# AI State Storage
async def ai_state_save(
    ai_id: str,
    todos: List[Dict] = None,
    working_notes: str = None,
    context_summary: str = None,
    custom_state: Dict = None
) -> Dict

async def ai_state_get(ai_id: str) -> Dict

Context MCP Server (Port 6620/6621)

Purpose: Context, knowledge, protocols, and memory management

Key Features:
  • Boot protocols for AI recovery
  • Memory recall/learn pattern
  • Synaptic Index searching
  • Protocol management
  • Summary state creation

Storage: Redis operational layer + NexusDB (FalkorDB)

Relevant for Recall:

# Memory Operations
def recall_memory(key: str, max_length: int = None) -> Dict[str, Any]
def learn_memory(key: str, value: str, metadata: Dict = None) -> Dict
def search_synaptic(pattern: str, limit: int = 10) -> List

Recall Integration Points

1. Session Integration

Link memories to sessions:

Every MemoryBead should track its originating session:

class MemoryBead:
    bead_id: str          # b_XXXX
    session_id: str       # Links to Session MCP
    subsession_id: str    # Optional work block ID
    # ... other fields

Benefits:
  • Retrieve all memories from a specific session
  • Session migration transfers associated memories
  • Session integrity checks include memory validation

Integration Methods:

# Create bead during session
async def create_bead_for_session(session_id: str, content: str, **kwargs) -> str:
    """
    Create a memory bead linked to current session.
    Uses Session MCP to validate session exists.
    """
    # Validate session
    session = await session_mcp.current()
    if session['session_id'] != session_id:
        raise ValueError("Session mismatch")

    # Create bead
    bead = MemoryBead(
        bead_id=generate_id('b'),
        session_id=session_id,
        content=content,
        **kwargs
    )

    # Store in Recall
    await recall_storage.create_bead(bead)

    # Link in Session (optional)
    await session_mcp.item(
        title=f"Memory: {bead.title}",
        item_type="memory",
        reference=f"recall:{bead.bead_id}"
    )

    return bead.bead_id

# Retrieve session memories
async def get_session_memories(session_id: str) -> List[MemoryBead]:
    return await recall_storage.get_by_session(session_id)

# Session migration with memories
async def migrate_session_with_memories(
    old_session_id: str,
    new_session_id: str
) -> Dict:
    # Migrate session (Session MCP)
    await session_mcp.migrate(new_session_id)

    # Update memory bead session links
    beads = await get_session_memories(old_session_id)
    for bead in beads:
        await recall_storage.update_bead(
            bead.bead_id,
            {'session_id': new_session_id}
        )

    return {'migrated_beads': len(beads)}

2. Context Integration

Use Context MCP as the foundation:

Recall should extend Context MCP's recall/learn pattern:

# Context MCP (existing)
context.recall(key)       # Simple key-value recall
context.learn(key, value) # Simple key-value storage

# Recall MCP (new)
recall.create_bead(...)      # Structured memory creation
recall.search_beads(query)   # Semantic search
recall.get_related(bead_id)  # Graph traversal

Storage Strategy:

Context MCP (existing):
  ↓
  Redis operational layer
  ↓
  FalkorDB (NexusDB)

Recall MCP (new):
  ↓
  FalkorDB graph nodes/edges  
  ↓
  Same backend, richer structure

Integration Methods:

# Unified recall interface
async def unified_recall(query: str) -> Dict:
    """
    Search both Context (simple k-v) and Recall (graph beads).
    """
    # Try Context first (faster)
    context_result = await context_mcp.recall(query)

    # Then Recall (semantic search)
    beads = await recall_mcp.search_beads(query, limit=10)

    return {
        'context': context_result,
        'beads': beads,
        'unified': True
    }

# Promote Context memory to Bead
async def promote_to_bead(context_key: str) -> str:
    """
    Convert simple Context memory to structured Bead.
    """
    # Get from Context
    memory = await context_mcp.recall(context_key)

    # Create Bead
    bead_id = await recall_mcp.create_bead(
        title=context_key,
        content=memory.get('value', ''),
        metadata={'promoted_from': context_key},
        bead_type='fact'
    )

    return bead_id

3. AI Groups Integration

Share memories across group members:

class MemoryBead:
    bead_id: str
    group_id: str         # AI Group ID (g_XXXX)
    created_by: str       # AI ID (o_XXXX or a_XXXX)
    visibility: str       # 'private', 'group', 'global'

Use cases:

  1. Ops shares context with agents:
# Ops creates shared memory
await recall_mcp.create_bead(
    title="Project Architecture Decision",
    content="We're using FalkorDB for graph storage...",
    group_id='g_tcrn',
    visibility='group'
)

# Agent retrieves group memories
group_memories = await recall_mcp.get_by_group('g_tcrn')

  2. Agent reports findings to ops:
# Agent creates memory for ops
await recall_mcp.create_bead(
    title="Beads Analysis Complete",
    content="Key findings: three-layer architecture...",
    group_id='g_tcrn',
    created_by='a_xyz1',
    visibility='group'
)

# Ops retrieves via aimsg or direct Recall query

  3. Context transfer during ops handoff:
# Transfer ops role with memories
await aimsg.transfer_ops(
    group_id='g_tcrn',
    from_ops_id='o_abc1',
    to_agent_id='a_xyz2',
    context_summary="..."
)

# New ops retrieves group memories
beads = await recall_mcp.get_by_group('g_tcrn')

Recall MCP Server Design

Tool Structure

# Core CRUD
recall.create_bead(title, content, bead_type, tags, metadata, session_id, group_id)
recall.get_bead(bead_id)
recall.update_bead(bead_id, updates)
recall.delete_bead(bead_id)

# Search & Retrieval
recall.search_beads(query, limit=20, filters={})  # Semantic + keyword
recall.find_similar(bead_id, threshold=0.7)       # Vector similarity
recall.get_by_session(session_id)                 # Session filter
recall.get_by_group(group_id)                     # Group filter
recall.get_by_tag(tag)                            # Tag filter
recall.get_recent(limit=10, since=None)           # Temporal query

# Relationships
recall.add_relationship(from_id, to_id, rel_type, metadata={})
recall.get_relationships(bead_id, rel_type=None)
recall.traverse_graph(start_id, max_depth=3, rel_types=[])

# Advanced
recall.deduplicate_beads(bead_id)                 # Find duplicates via content hash
recall.rank_by_importance(bead_ids)               # Sort by importance score
recall.decay_old_memories(threshold_days=90)      # Mark old beads as archived
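
The deduplicate_beads tool above relies on exact content hashing. A minimal sketch of how it might work, assuming the content_hash property from the FalkorDB node schema below is populated at creation time and that falkor.execute returns result rows:

import hashlib

def compute_content_hash(content: str) -> str:
    # Stable hash of normalized content, stored on each bead at creation time
    return hashlib.sha256(content.strip().lower().encode("utf-8")).hexdigest()

async def deduplicate_beads(bead_id: str) -> list:
    # Return IDs of other beads whose content_hash matches this bead's hash
    bead = await get_bead(bead_id)
    rows = await falkor.execute(
        "MATCH (b:MemoryBead {content_hash: $hash}) "
        "WHERE b.bead_id <> $bead_id "
        "RETURN b.bead_id",
        hash=bead.content_hash,
        bead_id=bead_id
    )
    return [row[0] for row in rows]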

Port Assignment

Recall MCP Server:
  • Operational Port: 6680
  • User Port: 6681
  • Redis DB: 17 (next available)

Storage Schema (FalkorDB)

Nodes:

CREATE (b:MemoryBead {
  bead_id: 'b_abc123',
  title: 'Architecture Decision',
  content: 'We decided to use...',
  content_hash: 'sha256...',
  bead_type: 'decision',
  importance: 8,
  tags: ['architecture', 'recall'],
  created_at: timestamp(),
  accessed_at: timestamp(),
  created_by: 'o_mhx8',
  session_id: 's_pjcc',
  group_id: 'g_tcrn',
  visibility: 'group'
})

Edges:

CREATE (b1)-[:FOLLOWS_FROM {strength: 0.9, created_at: timestamp()}]->(b2)
CREATE (b1)-[:RELATED_TO {similarity: 0.85}]->(b3)
CREATE (b1)-[:UPDATES {replaces: true}]->(b4)
CREATE (b1)-[:PART_OF]->(b_parent)
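
The edge patterns above assume the endpoint nodes are already bound; in practice add_relationship would match the nodes first. A minimal sketch reusing the document's falkor.execute helper; the relationship-type whitelist and the strength default are assumptions:

ALLOWED_REL_TYPES = {"FOLLOWS_FROM", "RELATED_TO", "UPDATES", "PART_OF"}

async def add_relationship(from_id: str, to_id: str, rel_type: str, metadata: dict = None) -> None:
    # Relationship labels cannot be parameterized in Cypher, so whitelist them
    if rel_type not in ALLOWED_REL_TYPES:
        raise ValueError(f"Unknown relationship type: {rel_type}")

    await falkor.execute(
        "MATCH (a:MemoryBead {bead_id: $from_id}), (b:MemoryBead {bead_id: $to_id}) "
        f"MERGE (a)-[r:{rel_type}]->(b) "
        "SET r.created_at = timestamp(), r.strength = $strength",
        from_id=from_id,
        to_id=to_id,
        strength=(metadata or {}).get("strength", 1.0)
    )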

Vector Embeddings

For semantic search:

# Generate embedding on bead creation
import numpy as np
from openai import AsyncOpenAI

openai_client = AsyncOpenAI()

async def create_bead_with_embedding(title: str, content: str, **kwargs):
    # Generate embedding (openai>=1.0 async client)
    response = await openai_client.embeddings.create(
        input=f"{title}\n{content}",
        model="text-embedding-3-small"
    )
    embedding = response.data[0].embedding

    # Store in Redis for fast similarity search,
    # packed as float32 bytes as expected by a Redis VSS vector field
    bead_id = generate_id('b')
    await redis.hset(
        f"bead:{bead_id}:embedding",
        mapping={
            'bead_id': bead_id,
            'vector': np.asarray(embedding, dtype=np.float32).tobytes()
        }
    )

    # Store bead in FalkorDB
    await falkor.execute(
        "CREATE (b:MemoryBead $props)",
        props={'bead_id': bead_id, 'title': title, 'content': content, **kwargs}
    )

    return bead_id

# Semantic search
async def search_beads(query: str, limit: int = 20) -> List[MemoryBead]:
    # Generate query embedding with the same model used for beads
    response = await openai_client.embeddings.create(
        input=query,
        model="text-embedding-3-small"
    )
    query_embedding = response.data[0].embedding

    # Vector similarity search (Redis VSS or pgvector)
    similar_ids = await vector_search(
        query_embedding,
        limit=limit,
        threshold=0.7
    )

    # Fetch full beads from FalkorDB
    beads = []
    for bead_id in similar_ids:
        bead = await get_bead(bead_id)
        beads.append(bead)

    return beads
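
vector_search is used above but not defined. A minimal sketch against Redis VSS, assuming a search index (here called bead_embeddings) has been created over the bead:*:embedding hashes with a float32 cosine-distance vector field named vector and a bead_id field:

import numpy as np
from redis.commands.search.query import Query

async def vector_search(query_vector, limit: int = 20, threshold: float = 0.7) -> list:
    # KNN query; "score" comes back as cosine distance (0 = identical)
    q = (
        Query(f"*=>[KNN {limit} @vector $vec AS score]")
        .sort_by("score")
        .return_fields("bead_id", "score")
        .dialect(2)
    )
    params = {"vec": np.asarray(query_vector, dtype=np.float32).tobytes()}
    results = await redis.ft("bead_embeddings").search(q, query_params=params)

    # Convert distance to similarity and drop weak matches
    return [
        doc.bead_id
        for doc in results.docs
        if (1.0 - float(doc.score)) >= threshold
    ]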

Implementation Phases

Phase 1: Foundation (Week 1)

Define data structures:
  • MemoryBead dataclass (see the sketch below)
  • BeadRelationship dataclass
  • Relationship type enums
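
A sketch of these structures, derived from the fields in the FalkorDB node schema above; the defaults are assumptions:

from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class RelationshipType(str, Enum):
    FOLLOWS_FROM = "FOLLOWS_FROM"
    RELATED_TO = "RELATED_TO"
    UPDATES = "UPDATES"
    PART_OF = "PART_OF"

@dataclass
class MemoryBead:
    bead_id: str                      # b_XXXX
    title: str
    content: str
    content_hash: str                 # sha256 of normalized content (dedup)
    bead_type: str                    # e.g. 'decision', 'fact'
    importance: int = 5               # 1-10, used for ranking
    tags: List[str] = field(default_factory=list)
    created_at: Optional[float] = None
    accessed_at: Optional[float] = None
    created_by: Optional[str] = None  # o_XXXX or a_XXXX
    session_id: Optional[str] = None  # s_XXXX (Session MCP link)
    group_id: Optional[str] = None    # g_XXXX (AI Group link)
    visibility: str = "private"       # 'private' | 'group' | 'global'

@dataclass
class BeadRelationship:
    from_id: str
    to_id: str
    rel_type: RelationshipType
    strength: float = 1.0
    created_at: Optional[float] = None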

Set up storage:
  • FalkorDB schema (nodes + edges)
  • Redis caching layer
  • Vector storage (Redis VSS or separate)

Build RecallStorage interface:
  • CRUD operations
  • Basic query methods

Phase 2: MCP Server (Week 2)

Create Recall MCP server:
  • FastMCP server setup (see the skeleton below)
  • Port configuration (6680/6681)
  • Tool definitions
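
A minimal FastMCP skeleton for the server. The import path assumes the standalone fastmcp package (the official MCP SDK exposes the same class under mcp.server.fastmcp), the tool bodies delegate to the storage layer from Phase 1, and the run/transport arguments for ports 6680/6681 depend on the FastMCP version in use:

from dataclasses import asdict
from fastmcp import FastMCP

mcp = FastMCP("recall")

@mcp.tool()
async def create_bead(
    title: str,
    content: str,
    bead_type: str = "fact",
    session_id: str = None,
    group_id: str = None
) -> dict:
    """Create a structured memory bead and return its ID."""
    bead_id = await create_bead_with_embedding(
        title, content,
        bead_type=bead_type, session_id=session_id, group_id=group_id
    )
    return {"bead_id": bead_id}

@mcp.tool()
async def get_bead(bead_id: str) -> dict:
    """Fetch a single memory bead by ID."""
    bead = await recall_storage.get_bead(bead_id)
    return asdict(bead)

if __name__ == "__main__":
    # Transport/port wiring (6680/6681) follows Nexus deployment conventions
    mcp.run()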

Implement core tools:
  • create_bead, get_bead, update_bead, delete_bead
  • search_beads (keyword only initially)
  • get_by_session, get_by_group

Integration testing:
  • Test with Session MCP
  • Test with Context MCP

Phase 3: Relationships (Week 3)

Implement relationship tools:
  • add_relationship
  • get_relationships
  • traverse_graph

Auto-relationship detection:
  • Detect similar beads on creation (see the sketch below)
  • Auto-link session-related beads
  • Temporal relationships (follows_from)
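
A sketch of what auto-linking on creation could look like, combining find_similar, get_by_session, and add_relationship from the tool list above; it assumes find_similar returns (bead_id, similarity) pairs, and the 0.85 cutoff is a placeholder to tune:

async def auto_link_on_creation(bead: MemoryBead, similarity_threshold: float = 0.85) -> None:
    # Link semantically similar beads
    for similar_id, similarity in await find_similar(bead.bead_id, threshold=similarity_threshold):
        await add_relationship(bead.bead_id, similar_id, "RELATED_TO", {"strength": similarity})

    # Chain to the most recent bead from the same session (temporal relationship)
    if bead.session_id:
        session_beads = await get_by_session(bead.session_id)
        previous = [b for b in session_beads if b.bead_id != bead.bead_id]
        if previous:
            latest = max(previous, key=lambda b: b.created_at or 0)
            await add_relationship(bead.bead_id, latest.bead_id, "FOLLOWS_FROM")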

Phase 4: Semantic Search (Week 4)

Vector embeddings:
  • OpenAI embedding generation
  • Vector storage integration
  • Similarity search

Advanced retrieval:
  • Hybrid search (keyword + semantic)
  • Importance-weighted ranking
  • Temporal decay for old memories (see the scoring sketch below)
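
One way importance weighting and temporal decay could be combined into a single ranking score; the half-life and the equal weighting of the three factors are assumptions to tune:

import time

def ranking_score(similarity: float, importance: int, created_at: float,
                  half_life_days: float = 90.0) -> float:
    # Blend semantic relevance, declared importance (1-10), and recency;
    # created_at is a Unix timestamp in seconds (convert first if stored in ms)
    age_days = (time.time() - created_at) / 86400
    recency = 0.5 ** (age_days / half_life_days)  # halves every half_life_days
    return similarity * (importance / 10) * recency

# Usage: sort semantic-search hits before returning them
# hits.sort(key=lambda h: ranking_score(h.similarity, h.importance, h.created_at), reverse=True)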

Phase 5: AI Groups Integration (Week 5)

Group memory support:
  • Visibility controls (private/group/global)
  • Group-filtered queries (see the query sketch below)
  • Ops ↔ Agent memory sharing
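
A sketch of how get_by_group could enforce the visibility rules; the requester_id parameter (for the private-bead check) and the assumption that falkor.execute returns result rows are additions not specified elsewhere in this document:

async def get_by_group(group_id: str, requester_id: str) -> list:
    # Return group- and globally-visible beads, plus the requester's own private beads
    rows = await falkor.execute(
        "MATCH (b:MemoryBead {group_id: $group_id}) "
        "WHERE b.visibility IN ['group', 'global'] "
        "   OR (b.visibility = 'private' AND b.created_by = $requester_id) "
        "RETURN b "
        "ORDER BY b.created_at DESC",
        group_id=group_id,
        requester_id=requester_id
    )
    return [row[0] for row in rows]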

Context transfer:
  • Ops handoff with memories
  • Agent pool with shared context


Technical Decisions

Storage Backend: FalkorDB

Why FalkorDB:
  • ✅ Already deployed in Nexus (NexusDB)
  • ✅ Graph database (perfect for relationships)
  • ✅ Redis-compatible (fast queries)
  • ✅ Supports Cypher queries
  • ✅ Integrated with Context MCP

Alternative considered: Neo4j (rejected - adds complexity, FalkorDB sufficient)

Vector Storage: Redis VSS

Why Redis VSS:
  • ✅ Same Redis instance as operational data
  • ✅ Fast vector similarity search
  • ✅ No additional infrastructure

Alternative considered: pgvector (rejected - don't need PostgreSQL)

Embedding Model: OpenAI text-embedding-3-small

Why:
  • ✅ Fast and cost-effective
  • ✅ 1536 dimensions (good balance)
  • ✅ Already using OpenAI API

Alternative considered: Local models (rejected - adds deployment complexity)

ID Format: b_XXXX

Why:
  • ✅ Consistent with Nexus conventions (g_XXXX, o_XXXX, a_XXXX, s_XXXX)
  • ✅ "b" for "bead"
  • ✅ Easy to identify in logs/references (a generate_id sketch follows below)
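
generate_id('b') is used throughout this document but not defined here; a minimal sketch, assuming a 4-character random suffix in line with the other Nexus IDs:

import secrets
import string

def generate_id(prefix: str, length: int = 4) -> str:
    # e.g. generate_id('b') -> 'b_7k2q'
    alphabet = string.ascii_lowercase + string.digits
    suffix = ''.join(secrets.choice(alphabet) for _ in range(length))
    return f"{prefix}_{suffix}"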


Success Metrics

Performance

  • Bead creation: < 100ms
  • Direct lookup: < 10ms
  • Semantic search: < 500ms (20 results)
  • Graph traversal (depth 3): < 200ms

Functionality

  • ✅ Create and retrieve beads
  • ✅ Session integration (link beads to sessions)
  • ✅ Group memory sharing
  • ✅ Relationship traversal
  • ✅ Semantic search with relevance ranking
  • ✅ Deduplication via content hashing

Reliability

  • Uptime: 99.9%
  • Data durability: No bead loss
  • Transaction consistency: All relationship operations atomic

Open Questions for Primary Ops

  1. Priority: Should Recall be prioritized over other g_tcrn work?
  2. Scope: Start with Phase 1-2 only, or full implementation?
  3. Agent assignment: Should this be delegated to an agent, or ops-led?
  4. Testing strategy: Build minimal prototype first, or comprehensive design?
  5. Timeline: Target completion date?

References

  • Session MCP: /opt/mcp-servers/session/mcp_session_server.py
  • Context MCP: /opt/mcp-servers/context/mcp_context_server.py
  • Beads Analysis: KB article fdfede6e
  • AI Groups Architecture: KB article e70c95b4
ID: 3fa0aae2
Path: Nexus Recall - Integration Strategy with Session and Context
Updated: 2026-01-13T12:51:40