A server for Zero-Vector's hybrid vector-graph persona and memory management system, featuring advanced LangGraph workflow capabilities.
STATUS: PRODUCTION READY | Infrastructure: VALIDATED | LangGraph: OPERATIONAL
A complete AI persona memory management system combining a high-performance hybrid vector-graph database server with advanced LangGraph workflow orchestration and a Model Context Protocol (MCP) interface for sophisticated AI memory, relationship understanding, and multi-agent coordination.
GitHub Repository: https://github.com/MushroomFleet/zero-vector-MCP
Last Infrastructure Validation: June 21, 2025
Current Version: v3.0 Production-Ready
- Zero-Vector v2 Server: ✅ Active (Port 3000)
- Zero-Vector v3 LangGraph: ✅ Active (Port 3001)
- MCP Server: ✅ Active (Multi-server coordination)
Zero-Vector MCP v3.0 provides a production-ready hybrid vector-graph solution with advanced LangGraph workflow capabilities for AI persona management. The overall architecture:
```mermaid
graph TB
    subgraph "AI Development Environment"
        A[Cline AI Assistant] --> B[MCP Client]
    end

    subgraph "Zero-Vector MCP v3.0 System"
        B --> C[MCP Server v3.0]
        C --> D[Zero-Vector v2 API]
        C --> E[Zero-Vector v3 API]

        subgraph "Zero-Vector v2 (Original System)"
            D --> F[Hybrid Vector Store]
            D --> G[Graph Database]
            D --> H[SQLite Metadata]
        end

        subgraph "Zero-Vector v3 (LangGraph System)"
            E --> I[LangGraph Workflow Engine]
            I --> J[Multi-Agent Orchestration]
            I --> K[Human Approval Service]
            I --> L[Performance Cache Manager]
            I --> M[State Management]
        end

        subgraph "v3.0 Core Services"
            N[Hybrid Memory Manager]
            O[Entity Extractor]
            P[Graph Service]
            Q[Embedding Service]
            R[Feature Flags]
            S[Workflow Manager]
            T[Approval Service]
        end

        D --> N
        D --> O
        D --> P
        D --> Q
        D --> R
        E --> S
        E --> T

        subgraph "LangGraph Agents"
            U[Hybrid Retrieval Agent]
            V[Persona Memory Agent]
            W[Multi-Step Reasoning Agent]
            X[Human Approval Agent]
        end

        J --> U
        J --> V
        J --> W
        J --> X

        subgraph "Knowledge Graph"
            Y[Entities]
            Z[Relationships]
            AA[Graph Traversal]
        end

        G --> Y
        G --> Z
        G --> AA
    end

    subgraph "External Services"
        BB[OpenAI Embeddings]
        CC[Local Transformers]
        DD[PostgreSQL Checkpointer]
        EE[Redis Cache]
    end

    Q --> BB
    Q --> CC
    M --> DD
    L --> EE

    style A fill:#e1f5fe
    style C fill:#f3e5f5
    style F fill:#e8f5e8
    style G fill:#ffeb3b
    style I fill:#e3f2fd
    style J fill:#e8f5e8
    style N fill:#fff3e0
    style O fill:#e1f5fe
    style P fill:#e1f5fe
    style Q fill:#fff3e0
    style R fill:#f3e5f5
    style S fill:#e3f2fd
    style T fill:#ffcdd2
```
```bash
# Clone the repository
git clone https://github.com/MushroomFleet/zero-vector-3.git
cd zero-vector-3

# 1. Set up the Zero-Vector v2 server (Original System)
cd zero-vector/server
npm install
npm run setup:database
npm run generate:api-key        # Generate API key for MCP
cp env.example .env             # Add your OpenAI API key
npm start                       # Runs on port 3000

# 2. Set up the Zero-Vector v3 server (LangGraph System)
cd zero-vector-3/server
npm install
cp env.example .env             # Configure environment
npm run setup:postgres          # Install PostgreSQL 16.9 and Redis first
npm run setup:infrastructure    # Set up PostgreSQL and Redis
npm start                       # Runs on port 3001

# 3. Set up the MCP server (in a new terminal)
cd MCP
npm install
cp env.example .env
# Edit .env with both server URLs and API keys
npm start
```
```bash
# Test the vector database (v2)
curl http://localhost:3000/health

# Test the LangGraph system (v3)
curl http://localhost:3001/health

# Test MCP server connection (both systems)
cd MCP
npm run test:connection
```
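If both servers come up asynchronously (for example in CI), the curl checks above can be scripted. Here is a minimal sketch, assuming Node 18+ for the global `fetch`; the retry behavior is illustrative and not part of Zero-Vector itself:

```javascript
// Poll a server's /health endpoint until it responds OK or retries run out.
// The fetch function is injectable so the helper can be tested offline.
async function waitForHealthy(baseUrl, { fetchFn = fetch, retries = 5, delayMs = 1000 } = {}) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      const res = await fetchFn(`${baseUrl}/health`);
      if (res.ok) return true;
    } catch (err) {
      // Server not up yet; fall through and retry after a delay.
    }
    if (attempt < retries) await new Promise((r) => setTimeout(r, delayMs));
  }
  return false;
}
```

Usage would be something like `await waitForHealthy("http://localhost:3000")` for v2 and the same for port 3001 before starting the MCP server.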
This system consists of three main components, each with detailed documentation:

- Zero-Vector v2 Server (zero-vector/README.md): the core vector database server
- Zero-Vector v3 Server (zero-vector-3/README.md): the advanced LangGraph workflow server
- MCP Server (MCP/README.md): the Model Context Protocol interface, providing the following tools:
- Persona Management: `create_persona`, `list_personas`, `get_persona`, `update_persona`, `delete_persona`
- Memory Operations: `add_memory`, `search_persona_memories`, `get_full_memory`, `add_conversation`, `get_conversation_history`, `cleanup_persona_memories`
- Graph Operations: `explore_knowledge_graph`, `hybrid_memory_search`, `get_graph_context`, `get_graph_stats`
- Workflow Tools: `execute_workflow`, `get_workflow_status`, `resume_workflow`, `cancel_workflow`, `list_active_workflows`, `get_workflow_metrics`
- Utilities: `get_system_health`, `get_persona_stats`, `test_connection`
```javascript
// Execute a basic conversation workflow
const workflowResult = await mcpClient.executeWorkflow({
  query: "Explain machine learning fundamentals for beginners",
  persona: "helpful_assistant",
  user_id: "user123",
  workflow_type: "zero_vector_conversation",
  config: {
    enable_approval: false,
    cache_enabled: true,
    confidence_threshold: 0.8,
    max_reasoning_steps: 5
  }
});

// Monitor workflow execution
const status = await mcpClient.getWorkflowStatus({
  workflow_id: workflowResult.workflow_id,
  thread_id: workflowResult.thread_id,
  include_metadata: true
});
```
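The `config` block above can be centralized in a small helper that applies defaults and sanity-checks values before calling `executeWorkflow`. This is a sketch: the option names and defaults mirror the example, but the validation rules are assumptions, not part of the documented API:

```javascript
// Illustrative defaults, taken from the basic workflow example above.
const WORKFLOW_CONFIG_DEFAULTS = {
  enable_approval: false,
  cache_enabled: true,
  confidence_threshold: 0.8,
  max_reasoning_steps: 5
};

// Merge caller overrides onto the defaults and reject out-of-range values.
function buildWorkflowConfig(overrides = {}) {
  const config = { ...WORKFLOW_CONFIG_DEFAULTS, ...overrides };
  if (config.confidence_threshold < 0 || config.confidence_threshold > 1) {
    throw new RangeError("confidence_threshold must be between 0 and 1");
  }
  if (!Number.isInteger(config.max_reasoning_steps) || config.max_reasoning_steps < 1) {
    throw new RangeError("max_reasoning_steps must be a positive integer");
  }
  return config;
}
```

A call such as `buildWorkflowConfig({ max_reasoning_steps: 10 })` would then feed directly into the `config` field of `executeWorkflow`.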
```javascript
// Execute a complex reasoning workflow
const complexWorkflow = await mcpClient.executeWorkflow({
  query: "Compare machine learning approaches for NLP and recommend the best approach for a chatbot system, considering scalability, cost, and performance",
  persona: "technical_expert",
  user_id: "user456",
  workflow_type: "multi_step_reasoning",
  config: {
    enable_approval: true,
    max_reasoning_steps: 10,
    confidence_threshold: 0.9,
    enable_memory_maintenance: true
  },
  thread_id: "conversation_abc123"
});

// Resume after human approval
const resumeResult = await mcpClient.resumeWorkflow({
  thread_id: "conversation_abc123",
  workflow_id: complexWorkflow.workflow_id,
  approval_result: {
    approved: true,
    feedback: "Add more details about implementation complexity",
    modifications: {
      add_implementation_details: true,
      focus_areas: ["complexity", "scalability", "cost"]
    }
  }
});
```
```javascript
// Get comprehensive workflow metrics
const metrics = await mcpClient.getWorkflowMetrics({
  time_range: "24h",
  workflow_type: "multi_step_reasoning",
  user_id: "user123",
  include_detailed: true
});

// List active workflows for monitoring
const activeWorkflows = await mcpClient.listActiveWorkflows({
  user_id: "user123",
  status: "running",
  workflow_type: "multi_step_reasoning",
  limit: 10
});

// Cancel a workflow if needed
await mcpClient.cancelWorkflow({
  workflow_id: "workflow_xyz789",
  thread_id: "conversation_abc123",
  reason: "User requested cancellation due to changed requirements"
});
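For scripted use, `getWorkflowStatus` can be wrapped in a polling loop that waits for a terminal state. The sketch below injects the client so it stays independent of any real MCP transport; the terminal status names ("completed", "failed", "cancelled") are assumptions for illustration:

```javascript
// Poll a workflow until it reaches a terminal state or the window expires.
// `ids` carries workflow_id and thread_id, as in the examples above.
async function pollWorkflow(client, ids, { intervalMs = 2000, maxPolls = 30 } = {}) {
  const terminal = new Set(["completed", "failed", "cancelled"]);
  for (let i = 0; i < maxPolls; i++) {
    const status = await client.getWorkflowStatus({ ...ids, include_metadata: false });
    if (terminal.has(status.status)) return status;
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  throw new Error("workflow did not finish within the polling window");
}
```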
```javascript
// Create a persona for an AI assistant
const persona = await mcpClient.createPersona({
  name: "Technical Assistant",
  description: "Helpful coding assistant with memory and graph knowledge",
  systemPrompt: "You are a helpful technical assistant with access to knowledge graphs...",
  maxMemorySize: 1000
});

// Add important information to memory (automatically extracts entities)
await mcpClient.addMemory({
  personaId: persona.id,
  content: "John Smith from Microsoft called about the Azure project. He mentioned working with Sarah Johnson on cloud architecture.",
  type: "conversation",
  importance: 0.8
});
```
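To illustrate what automatic entity extraction might surface from that memory, here is a deliberately naive sketch; the real extractor runs server-side inside Zero-Vector and is far more sophisticated than this:

```javascript
// Toy entity extraction: pull runs of capitalized words out of a sentence.
// Note this also catches sentence-initial pronouns like "He"; a real
// extractor disambiguates with NER models rather than a regex.
function naiveExtractEntities(text) {
  const matches = text.match(/(?:[A-Z][a-z]+)(?:\s[A-Z][a-z]+)*/g) || [];
  return [...new Set(matches)]; // de-duplicate repeated mentions
}
```

Running it on the memory above would surface candidates such as "John Smith", "Microsoft", "Azure", and "Sarah Johnson", which is the kind of material the knowledge graph is built from.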
```javascript
// Hybrid search combining vector similarity with graph expansion
const hybridResults = await mcpClient.hybridMemorySearch({
  personaId: persona.id,
  query: "cloud architecture project",
  limit: 5,
  useGraphExpansion: true,
  graphDepth: 2,
  threshold: 0.7
});
```
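Conceptually, graph expansion adds entity neighbors to the raw vector hits, with each neighbor's score discounted by its traversal depth. The following toy sketch shows that merge; the actual server-side ranking is not documented here, so the decay model and field names are purely illustrative:

```javascript
// Merge vector hits with graph-expanded neighbors.
// Each neighbor inherits its source hit's score, decayed by depth,
// then everything is de-duplicated, thresholded, and re-ranked.
function mergeHybridResults(vectorHits, graphNeighbors, { decay = 0.5, threshold = 0.7 } = {}) {
  const byId = new Map();
  for (const hit of vectorHits) byId.set(hit.id, { ...hit });
  for (const n of graphNeighbors) {
    const score = n.sourceScore * Math.pow(decay, n.depth);
    const existing = byId.get(n.id);
    if (!existing || existing.score < score) byId.set(n.id, { id: n.id, score });
  }
  return [...byId.values()]
    .filter((r) => r.score >= threshold)
    .sort((a, b) => b.score - a.score);
}
```

With a gentle decay, a depth-1 neighbor of a strong hit can still clear the 0.7 threshold from the example, which is how graph expansion surfaces related memories that pure vector similarity would miss.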
```javascript
// Use a workflow to process complex queries with memory integration
const workflowWithMemory = await mcpClient.executeWorkflow({
  query: "Based on my previous conversations about Azure, what are the key architectural considerations for our project?",
  persona: persona.id,
  user_id: "user123",
  workflow_type: "zero_vector_conversation",
  config: {
    use_memory_integration: true,
    enable_graph_expansion: true,
    memory_context_depth: 3
  }
});

// Get comprehensive context for specific entities
const context = await mcpClient.getGraphContext({
  personaId: persona.id,
  entityIds: ["entity-john-smith-uuid", "entity-microsoft-uuid"],
  includeRelationships: true,
  maxDepth: 2
});

// Search with configurable content display
const searchWithFullContent = await mcpClient.searchPersonaMemories({
  personaId: persona.id,
  query: "white hat tales",
  limit: 5,
  show_full_content: true, // No truncation
  threshold: 0.3
});
```
```json
{
  "mcpServers": {
    "zero-vector-v3": {
      "command": "node",
      "args": ["C:/path/to/zero-vector-MCP/MCP/src/index.js"],
      "env": {
        "ZERO_VECTOR_BASE_URL": "http://localhost:3000",
        "ZERO_VECTOR_API_KEY": "your_zero_vector_2_api_key",
        "ZERO_VECTOR_V3_BASE_URL": "http://localhost:3001",
        "ZERO_VECTOR_V3_API_KEY": "your_zero_vector_3_api_key"
      }
    }
  }
}
```
```
zero-vector-MCP/
├── zero-vector/                 # Vector database server (v2)
│   ├── server/                  # Node.js backend
│   │   ├── src/                 # Source code
│   │   ├── scripts/             # Setup scripts
│   │   ├── data/                # Database files
│   │   └── README.md            # Server documentation
│   └── README.md                # Server overview
├── zero-vector-3/               # LangGraph workflow server (v3)
│   ├── server/                  # Node.js backend
│   │   ├── src/                 # Source code
│   │   │   ├── agents/          # LangGraph agents
│   │   │   ├── graphs/          # Workflow graphs
│   │   │   ├── services/        # Core services
│   │   │   └── state/           # State management
│   │   ├── scripts/             # Setup scripts
│   │   └── README.md            # v3 documentation
│   └── README.md                # v3 overview
├── MCP/                         # Model Context Protocol server
│   ├── src/                     # MCP server source
│   │   ├── tools/               # MCP tool implementations
│   │   │   ├── workflows.js     # NEW: Workflow tools
│   │   │   ├── personas.js      # Persona management
│   │   │   ├── memories.js      # Memory operations
│   │   │   ├── graph.js         # Graph operations
│   │   │   └── utilities.js     # System utilities
│   │   └── utils/               # Utilities
│   ├── .env.example             # Environment template
│   └── README.md                # MCP documentation
├── DOCS/                        # Internal documentation
└── README.md                    # This file
```
```bash
# Start the Zero-Vector v2 server in development mode
cd zero-vector/server
npm run dev    # Port 3000

# Start the Zero-Vector v3 server in development mode (new terminal)
cd zero-vector-3/server
npm run dev    # Port 3001

# Start the MCP server in development mode (new terminal)
cd MCP
npm run dev

# Run tests
npm test
```
Zero-Vector v2 Server:

```env
NODE_ENV=development
PORT=3000
MAX_MEMORY_MB=2048
DEFAULT_DIMENSIONS=1536
LOG_LEVEL=info
```

Zero-Vector v3 Server:

```env
NODE_ENV=development
PORT=3001
POSTGRES_URL=postgresql://localhost:5432/zerovector3
REDIS_URL=redis://localhost:6379/0
LANGSMITH_TRACING=true
OPENAI_API_KEY=your_openai_key
LOG_LEVEL=info
```

MCP Server:

```env
# v2 Server Configuration
ZERO_VECTOR_BASE_URL=http://localhost:3000
ZERO_VECTOR_API_KEY=your_zero_vector_2_api_key

# v3 Server Configuration
ZERO_VECTOR_V3_BASE_URL=http://localhost:3001
ZERO_VECTOR_V3_API_KEY=your_zero_vector_3_api_key

# MCP Configuration
MCP_SERVER_NAME=zero-vector-mcp-v3
LOG_LEVEL=info
```
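A small startup guard can fail fast when any of the MCP variables above are unset. The variable names come from the configuration shown; the helper itself is an illustrative sketch, not part of the MCP server:

```javascript
// Required MCP environment variables, as listed in the configuration above.
const REQUIRED_MCP_ENV = [
  "ZERO_VECTOR_BASE_URL",
  "ZERO_VECTOR_API_KEY",
  "ZERO_VECTOR_V3_BASE_URL",
  "ZERO_VECTOR_V3_API_KEY"
];

// Return the names of any required variables that are missing or blank.
function missingEnvVars(env, required = REQUIRED_MCP_ENV) {
  return required.filter((name) => !env[name] || env[name].trim() === "");
}

// Example: fail fast before starting the MCP server.
// const missing = missingEnvVars(process.env);
// if (missing.length) throw new Error(`Missing env vars: ${missing.join(", ")}`);
```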
To contribute:

1. Create a feature branch (`git checkout -b feature/amazing-feature`)
2. Commit your changes (`git commit -m 'Add amazing feature'`)
3. Push to the branch (`git push origin feature/amazing-feature`)

This project is licensed under the MIT License - see the LICENSE file for details.
Documentation:

- zero-vector/README.md for detailed v2 server documentation
- zero-vector-3/README.md for v3 workflow documentation
- MCP/README.md for MCP setup and tool documentation

Connection Issues:
```bash
# Check Zero-Vector v2 server health
curl http://localhost:3000/health

# Check Zero-Vector v3 server health
curl http://localhost:3001/health

# Test MCP server connection
cd MCP && npm run test:connection
```
Workflow Issues:
```bash
# Check workflow status
cd MCP && node -e "console.log('Check workflow metrics via MCP tools')"

# Review workflow logs
tail -f zero-vector-3/server/logs/combined.log
```
Common Issues:

- Verify that server URLs and API keys are configured correctly in each component's `.env` file

Zero-Vector MCP v3.0 - Production-ready hybrid vector-graph AI memory system with advanced LangGraph workflow orchestration and multi-agent intelligence