# Shared Memory MCP Server
Solving the coordination tax in agentic teams, where an Opus coordinator plus four Sonnet workers burns 15x the tokens of a single request but delivers only 1.9x the performance.
## Prerequisites
- Node.js 18+
- npm or yarn
- Claude Desktop (for MCP integration)
## The Problem
Current agentic team patterns have terrible token efficiency:
- Traditional: 1 request × 4K tokens = 4K tokens
- Agentic Team: 1 coordinator + 4 workers × 12K tokens each = 48K+ tokens
- Efficiency: 1.9x performance / 15x cost = 12% efficiency
This MCP server provides shared memory for agentic teams to achieve 6x token efficiency while maintaining coordination benefits.
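The arithmetic behind these figures can be checked directly. A small sketch using the numbers quoted above (~4K tokens for a single request, ~12K tokens per agent, ~8K total with shared memory); variable names are illustrative, not part of the server's API:

```javascript
// Token costs from the figures above.
const traditionalTokens = 4_000;            // one request with full context
const agenticTokens = 12_000 * (1 + 4);     // coordinator + 4 workers at ~12K each
const sharedMemoryTokens = 8_000;           // compressed context + delta updates

// Cost multipliers relative to the single-request baseline.
const agenticCost = agenticTokens / traditionalTokens;       // 15
const sharedCost = sharedMemoryTokens / traditionalTokens;   // 2

// Efficiency = performance gain / cost multiplier.
const performanceGain = 1.9;
const agenticEfficiency = performanceGain / agenticCost;     // ~0.13, i.e. ~12%
const sharedEfficiency = performanceGain / sharedCost;       // ~0.95

// Token reduction for the worker fan-out (48K of worker context -> 8K).
const tokenImprovement = 48_000 / sharedMemoryTokens;        // 6
```

Note that the 15x figure counts the coordinator's ~12K tokens on top of the 48K consumed by the four workers.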
## Core Features

### 1. Context Deduplication

- Store shared context once, reference by key
- 10:1 compression ratio with intelligent summarization
- Workers get 100-token summaries instead of full context

### 2. Incremental State Sharing

- Append-only discovery system
- Workers share findings in real-time
- Delta updates prevent retransmission

### 3. Work Coordination

- Claim-based work distribution
- Dependency tracking and resolution
- Reactive task handoff between workers

### 4. Token Efficiency

- Context compression and lazy loading
- Delta updates since last version
- Expansion on demand for specific sections
## Installation

```bash
# Clone the repository
git clone https://github.com/haasonsaas/shared-memory-mcp.git
cd shared-memory-mcp

# Install dependencies
npm install

# Build the server
npm run build
```
## Quick Start

```bash
# Run in development mode
npm run dev

# Or run the built server
npm start

# Test the agentic workflow
npm test
# or
npm run test-workflow
```
## Usage Example

```javascript
// 1. Create agentic session (coordinator)
const session = await mcp.callTool('create_agentic_session', {
  coordinator_id: 'opus-coordinator-1',
  worker_ids: ['sonnet-1', 'sonnet-2', 'sonnet-3', 'sonnet-4'],
  task_description: 'Analyze large codebase for performance issues',
  codebase_files: [...], // Full context stored once
  requirements: [...],
  constraints: [...]
});

// 2. Workers get compressed context (not full retransmission)
const context = await mcp.callTool('get_worker_context', {
  session_id: session.session_id,
  worker_id: 'sonnet-1'
}); // Returns summary + reference, not full context

// 3. Publish work units for coordination
await mcp.callTool('publish_work_units', {
  session_id: session.session_id,
  work_units: [
    { unit_id: 'analyze-auth', type: 'security', priority: 'high' },
    { unit_id: 'optimize-db', type: 'performance', dependencies: ['analyze-auth'] }
  ]
});

// 4. Workers claim and execute
await mcp.callTool('claim_work_unit', {
  session_id: session.session_id,
  unit_id: 'analyze-auth',
  worker_id: 'sonnet-1',
  estimated_duration_minutes: 15
});

// 5. Share discoveries incrementally
await mcp.callTool('add_discovery', {
  session_id: session.session_id,
  worker_id: 'sonnet-1',
  discovery_type: 'vulnerability_found',
  data: { vulnerability: 'SQL injection in auth module' },
  affects_workers: ['sonnet-2'] // Notify relevant workers
});

// 6. Get only new updates (delta, not full context)
const delta = await mcp.callTool('get_context_delta', {
  session_id: session.session_id,
  worker_id: 'sonnet-2',
  since_version: 5 // Only get changes since version 5
});
```
## Architecture

```
┌─────────────────┐    ┌─────────────────┐
│ Opus Coordinator│    │  Shared Memory  │
│                 │────│   MCP Server    │
│ - Task Planning │    │                 │
│ - Work Units    │    │ - Context Store │
│ - Coordination  │    │ - Discovery Log │
└─────────────────┘    │ - Work Queue    │
                       │ - Dependencies  │
┌─────────────────┐    └─────────────────┘
│ Sonnet Workers  │            │
│                 │────────────┘
│ - Specialized   │
│ - Parallel      │    ┌─────────────────┐
│ - Coordinated   │    │ Token Efficiency│
└─────────────────┘    │                 │
                       │ 48K → 8K tokens │
                       │ 6x improvement  │
                       │ 1200% better ROI│
                       └─────────────────┘
```
## Token Efficiency Strategies

### Context Compression

```javascript
// Instead of sending full context (12K tokens):
{
  full_context: { /* massive object */ }
}

// Send compressed reference (100 tokens):
{
  summary: "Task: Analyze TypeScript codebase...",
  reference_key: "ctx_123",
  expansion_hints: ["codebase_files", "requirements"]
}
```

### Delta Updates

```javascript
// Instead of retransmitting everything:
get_full_context() // 12K tokens each time

// Send only changes:
get_context_delta(since_version: 5) // 200 tokens
```

### Lazy Loading

```javascript
// Workers request details only when needed:
expand_context_section("codebase_files") // 2K tokens
request_detail("file_content", "auth.ts") // 500 tokens
```
## API Reference
### Session Management

- `create_agentic_session` - Initialize coordinator + workers
- `get_session_info` - Get session details
- `update_session_status` - Update session state
### Context Management

- `get_worker_context` - Get compressed context for worker
- `expand_context_section` - Get detailed section data
- `get_context_delta` - Get incremental updates
### Work Coordination

- `publish_work_units` - Publish available work
- `claim_work_unit` - Claim work for execution
- `update_work_status` - Update work progress
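The claim semantics can be pictured with a minimal sketch (hypothetical, illustrating the behavior rather than the server's code): the first worker to claim a unit wins, and later claims are rejected so two workers never duplicate work.

```javascript
// Hypothetical claim-based work queue: first claim wins, later claims fail.
class WorkQueue {
  constructor() {
    this.units = new Map();
  }

  publish(unit) {
    this.units.set(unit.unit_id, { ...unit, claimed_by: null });
  }

  claim(unitId, workerId) {
    const unit = this.units.get(unitId);
    if (!unit || unit.claimed_by !== null) return false; // missing or taken
    unit.claimed_by = workerId;
    return true;
  }
}

const queue = new WorkQueue();
queue.publish({ unit_id: 'analyze-auth', type: 'security', priority: 'high' });

const first = queue.claim('analyze-auth', 'sonnet-1');  // succeeds
const second = queue.claim('analyze-auth', 'sonnet-2'); // rejected: already claimed
```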
### Discovery Sharing

- `add_discovery` - Share findings with team
- `get_discoveries_since` - Get recent discoveries
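The append-only log behind these two tools can be sketched as follows (a hypothetical illustration; the second discovery's type and data are invented for the example): each entry gets a monotonically increasing version, so a worker pulls only entries newer than the last version it saw.

```javascript
// Hypothetical append-only discovery log with version-based reads.
class DiscoveryLog {
  constructor() {
    this.entries = [];
  }

  add(workerId, discoveryType, data) {
    const version = this.entries.length + 1; // versions grow monotonically
    this.entries.push({ version, worker_id: workerId, discovery_type: discoveryType, data });
    return version;
  }

  // Delta read: everything published after the given version.
  since(version) {
    return this.entries.filter(entry => entry.version > version);
  }
}

const log = new DiscoveryLog();
log.add('sonnet-1', 'vulnerability_found', { vulnerability: 'SQL injection in auth module' });
log.add('sonnet-2', 'hotspot_found', { file: 'db.ts' });

// A worker that last saw version 1 receives only the newer entry.
const fresh = log.since(1);
```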
### Dependency Resolution

- `declare_outputs` - Declare future outputs
- `await_dependency` - Wait for dependency
- `publish_output` - Publish output for others
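One way to picture the declare/await/publish handshake is one promise per declared output (a hypothetical sketch, not the server's implementation; class and method names are invented): consumers await the promise, and it resolves when the producer publishes.

```javascript
// Hypothetical dependency board: a promise per declared output.
class DependencyBoard {
  constructor() {
    this.outputs = new Map();
  }

  // Producer declares an output it will publish later.
  declareOutput(name) {
    let resolve;
    const promise = new Promise(r => { resolve = r; });
    this.outputs.set(name, { promise, resolve, value: undefined });
  }

  // Consumer gets a promise that settles once the output is published.
  awaitDependency(name) {
    return this.outputs.get(name).promise;
  }

  // Producer publishes, resolving every waiting consumer.
  publishOutput(name, value) {
    const entry = this.outputs.get(name);
    entry.value = value;
    entry.resolve(value);
  }
}

const board = new DependencyBoard();
board.declareOutput('auth-findings');                    // producer declares upfront
const pending = board.awaitDependency('auth-findings');  // consumer waits
board.publishOutput('auth-findings', { issues: 3 });     // producer delivers
```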
## MCP Configuration

### For Claude Desktop

1. Copy the example configuration:

   ```bash
   cp claude-desktop-config.example.json claude-desktop-config.json
   ```

2. Edit `claude-desktop-config.json` and update the path to your installation:

   ```json
   {
     "mcpServers": {
       "shared-memory": {
         "command": "node",
         "args": ["/absolute/path/to/shared-memory-mcp/dist/server.js"]
       }
     }
   }
   ```

3. Add this configuration to your Claude Desktop config file:

   - macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
   - Windows: `%APPDATA%\Claude\claude_desktop_config.json`
   - Linux: `~/.config/Claude/claude_desktop_config.json`

Note: The `claude-desktop-config.json` file is gitignored as it contains machine-specific paths.
## Performance Benefits
| Metric | Traditional | Agentic (Current) | Shared Memory MCP |
|---|---|---|---|
| Token Usage | 4K | 48K+ | 8K |
| Performance Gain | 1x | 1.9x | 1.9x |
| Cost Efficiency | 100% | 12% | 1200% |
| Coordination | None | Poor | Excellent |
## License
MIT