memory-mcp-1file

🏠 🍎 🪟 🐧 - A self-contained Memory server with single-binary architecture (embedded DB & models, no dependencies). Provides persistent semantic and graph-based memory for AI agents.

🧠 Memory MCP Server


A high-performance, pure Rust Model Context Protocol (MCP) server that provides persistent, semantic, and graph-based memory for AI agents.

Works out of the box with:

  • Claude Desktop
  • Claude Code (CLI)
  • Cursor
  • OpenCode
  • Cline / Roo Code
  • Any other MCP-compliant client

🏆 The "All-in-One" Advantage

Unlike other memory solutions that require a complex stack (Python + Vector DB + Graph DB), this project is a single, self-contained executable.

  • No External Database (SurrealDB is embedded)
  • No Python Dependencies (Embedding models run via embedded ONNX runtime)
  • No API Keys Required (All models run locally on CPU)
  • Zero Setup (Just run one Docker container or binary)

It combines:

  1. Vector Search (FastEmbed) for semantic similarity.
  2. Knowledge Graph (PetGraph) for entity relationships.
  3. Code Indexing for understanding your codebase.
  4. Hybrid Retrieval (Reciprocal Rank Fusion) for best results.
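
For reference, Reciprocal Rank Fusion merges the ranked result lists from the individual retrievers (vector, keyword, graph) into a single score per document. In the standard formulation (the constant k, commonly 60, damps the influence of lower-ranked hits; the exact constant used here is an implementation detail):

RRF_score(d) = Σ_r  1 / (k + rank_r(d))

where rank_r(d) is the position of document d in retriever r's result list.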

🏗️ Architecture

graph TD
    User[AI Agent / IDE]
    
    subgraph "Memory MCP Server"
        MS[MCP Server]
        
        subgraph "Core Engines"
            ES[Embedding Service]
            GS[Graph Service]
            CS[Codebase Service]
        end
        
        MS -- "Store / Search" --> ES
        MS -- "Relate Entities" --> GS
        MS -- "Index" --> CS
        
        ES -- "Vectorize Text" --> SDB[(SurrealDB Embedded)]
        GS -- "Knowledge Graph" --> SDB
        CS -- "AST Chunks" --> SDB
    end

    User -- "MCP Protocol" --> MS

Click here for the Detailed Architecture Documentation


🤖 Agent Integration (System Prompt)

Memory is useless if your agent doesn't check it. To get the "Long-Term Memory" effect, you must instruct your agent to follow a strict protocol.

We provide a battle-tested Memory Protocol (AGENTS.md) that you can adapt.

🛡️ Core Workflows (Context Protection)

The protocol implements specific flows to handle Context Window Compaction and Session Restarts:

  1. 🚀 Session Startup: The agent must search for TASK: in_progress immediately. This restores the full context of what was happening before the last session ended or the context was compacted.
  2. ⏳ Auto-Continue: A safety mechanism where the agent presents the found task to the user and waits (or auto-continues), ensuring it doesn't hallucinate a new task.
  3. 🔄 Triple Sync: Updates Memory, Todo List, and Files simultaneously. If one fails (e.g., context lost), the others serve as backups.
  4. 🧱 Prefix System: All memories use prefixes (TASK:, DECISION:, RESEARCH:) so semantic search can precisely target the right type of information, reducing noise.

These workflows turn the agent from a "stateless chatbot" into a "stateful worker" that survives restarts and context clearing.
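
As a sketch of the prefix system in practice, a store_memory call for an in-progress task might look like this (the argument names are illustrative, not the tool's exact schema):

{
  "tool": "store_memory",
  "arguments": {
    "content": "TASK: in_progress — Migrate auth module to JWT. Next step: update the refresh-token handler.",
    "metadata": { "type": "task", "status": "in_progress" }
  }
}

A later search_text("TASK: in_progress") then surfaces exactly this entry, no matter how far the conversation has drifted.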

Recommended System Prompt Snippet

Instead of scattering instructions across IDE-specific files (like .cursorrules), establish AGENTS.md as the Single Source of Truth.

Instruct your agent (in its base system prompt) to:

  1. Read AGENTS.md at the start of every session.
  2. Follow the protocols defined therein.

Here is a minimal reference prompt to bootstrap this behavior:

# 🧠 Memory & Protocol
You have access to a persistent memory server and a protocol definition file.

1.  **Protocol Adherence**:
    - READ `AGENTS.md` immediately upon starting.
    - Strictly follow the "Session Startup" and "Sync" protocols defined there.

2.  **Context Restoration**:
    - Run `search_text("TASK: in_progress")` to restore context.
    - Do NOT ask the user "what should I do?" if a task is already in progress.

Why this matters

Without this protocol, the agent loses context after compaction or session restarts. With this protocol, it maintains the full context of the current task, ensuring no steps or details are lost, even when the chat history is cleared.


🔌 Client Configuration

Universal Docker Configuration (Any IDE/CLI)

To use this MCP server with any client (Claude Code, OpenCode, Cline, etc.), use the following Docker command structure.

Key Requirements:

  1. Memory Volume: -v mcp-data:/data (Persists your graph and embeddings)
  2. Project Volume: -v $(pwd):/project:ro (Allows the server to read and index your code)
  3. Init Process: --init (Ensures the server shuts down cleanly)

JSON Configuration (Claude Desktop, etc.)

Add this to your configuration file (e.g., claude_desktop_config.json):

{
  "mcpServers": {
    "memory": {
      "command": "docker",
      "args": [
        "run",
        "--init",
        "-i",
        "--rm",
        "-v", "mcp-data:/data",
        "-v", "/absolute/path/to/your/project:/project:ro",
        "ghcr.io/pomazanbohdan/memory-mcp-1file:latest"
      ]
    }
  }
}

Note: Replace /absolute/path/to/your/project with the actual path you want to index. In some environments (like Cursor or VSCode extensions), you might be able to use variables like ${workspaceFolder}, but absolute paths are most reliable for Docker.

Cursor (Specific Instructions)

  1. Go to Cursor Settings > Features > MCP Servers.
  2. Click + Add New MCP Server.
  3. Type: stdio
  4. Name: memory
  5. Command:
    docker run --init -i --rm -v mcp-data:/data -v "/Users/yourname/projects/current:/project:ro" ghcr.io/pomazanbohdan/memory-mcp-1file:latest
    
    (Remember to update the project path when switching workspaces if you need code indexing)

OpenCode / CLI

docker run --init -i --rm \
  -v mcp-data:/data \
  -v $(pwd):/project:ro \
  ghcr.io/pomazanbohdan/memory-mcp-1file:latest

✨ Key Features

  • Semantic Memory: Stores text with vector embeddings (e5_multi, i.e. multilingual-e5-base, by default) for "vibe-based" retrieval.
  • Graph Memory: Tracks entities (User, Project, Tech) and their relations (uses, likes). Supports PageRank-based traversal.
  • Code Intelligence: Indexes local project directories (AST-based chunking) to answer questions about your code.
  • Temporal Validity: Memories can have valid_from and valid_until dates.
  • SurrealDB Backend: Fast, embedded, single-file database.
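
For example, temporal validity lets you store a decision with an expiry so that get_valid stops returning it once it lapses (field names are illustrative; check the tool schema for the exact spelling):

{
  "tool": "store_memory",
  "arguments": {
    "content": "DECISION: Keep the NEW_CHECKOUT feature flag until the rollout completes.",
    "valid_from": "2024-06-01T00:00:00Z",
    "valid_until": "2024-09-01T00:00:00Z"
  }
}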

🛠️ Tools Available

The server exposes 26 tools to the AI model, organized into logical categories.

🧠 Core Memory Management

| Tool | Description |
|------|-------------|
| store_memory | Store a new memory with content and optional metadata. |
| update_memory | Update an existing memory (only provided fields). |
| delete_memory | Delete a memory by its ID. |
| list_memories | List memories with pagination (newest first). |
| get_memory | Get a specific memory by ID. |
| invalidate | Soft-delete a memory (mark as invalid). |
| get_valid | Get currently active memories (filters out expired ones). |
| get_valid_at | Get memories that were valid at a specific past timestamp. |

🔎 Search & Retrieval

| Tool | Description |
|------|-------------|
| recall | Hybrid search (Vector + Keyword + Graph). Best for general questions. |
| search | Pure semantic vector search. |
| search_text | Exact keyword match (BM25). |
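
A typical retrieval pattern: use recall for open-ended questions and search_text when you need an exact prefix hit. An illustrative recall invocation (parameter names are assumptions, not the exact schema):

{
  "tool": "recall",
  "arguments": {
    "query": "How does the project handle authentication?",
    "limit": 5
  }
}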

🕸️ Knowledge Graph

| Tool | Description |
|------|-------------|
| create_entity | Define a node (e.g., "React", "Authentication"). |
| create_relation | Link nodes (e.g., "Project" -> "uses" -> "React"). |
| get_related | Find connected concepts via graph traversal. |
| detect_communities | Detect communities in the graph using the Leiden algorithm. |
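
A minimal sketch of building the graph, using illustrative argument names:

{ "tool": "create_entity",   "arguments": { "name": "Project" } }
{ "tool": "create_entity",   "arguments": { "name": "React" } }
{ "tool": "create_relation", "arguments": { "from": "Project", "relation": "uses", "to": "React" } }

After this, a get_related query on "Project" would surface "React" via the "uses" edge.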

💻 Codebase Intelligence

| Tool | Description |
|------|-------------|
| index_project | Scan and index a local folder for code search. |
| get_index_status | Check if indexing is in progress or failed. |
| list_projects | List all indexed projects. |
| delete_project | Remove a project and its code chunks from the index. |
| search_code | Semantic search over code chunks. |
| search_symbols | Search for functions/classes by name. |
| get_callers | Find functions that call a given symbol. |
| get_callees | Find functions called by a given symbol. |
| get_related_symbols | Get related symbols via graph traversal. |
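
A typical indexing session, sketched with illustrative arguments (/project matches the Docker mount shown above):

{ "tool": "index_project",    "arguments": { "path": "/project" } }
{ "tool": "get_index_status", "arguments": {} }
{ "tool": "search_code",      "arguments": { "query": "where is the JWT validated?" } }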

⚙️ System & Maintenance

| Tool | Description |
|------|-------------|
| get_status | Get server health and loading status. |
| reset_all_memory | DANGER: Wipes all data (memories, graph, code). |

⚙️ Configuration

Environment variables or CLI args:

| Arg | Env | Default | Description |
|-----|-----|---------|-------------|
| --data-dir | DATA_DIR | ./data | DB location |
| --model | EMBEDDING_MODEL | e5_multi | Embedding model (e5_small, e5_multi, nomic, bge_m3) |
| --log-level | LOG_LEVEL | info | Verbosity |
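
In Docker, these environment variables map to standard -e flags. For example, to run with the smallest model and verbose logging, extend the JSON configuration from above like this:

{
  "mcpServers": {
    "memory": {
      "command": "docker",
      "args": [
        "run", "--init", "-i", "--rm",
        "-e", "EMBEDDING_MODEL=e5_small",
        "-e", "LOG_LEVEL=debug",
        "-v", "mcp-data:/data",
        "ghcr.io/pomazanbohdan/memory-mcp-1file:latest"
      ]
    }
  }
}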

🧠 Available Models

You can switch the embedding model using the --model arg or EMBEDDING_MODEL env var.

| Argument Value | HuggingFace Repo | Dimensions | Size | Use Case |
|----------------|------------------|------------|------|----------|
| e5_small | intfloat/multilingual-e5-small | 384 | 134 MB | Fastest, minimal RAM. Good for dev/testing. |
| e5_multi | intfloat/multilingual-e5-base | 768 | 1.1 GB | Default. Best balance of quality/speed. |
| nomic | nomic-ai/nomic-embed-text-v1.5 | 768 | 1.9 GB | High quality long-context embeddings. |
| bge_m3 | BAAI/bge-m3 | 1024 | 2.3 GB | State-of-the-art multilingual quality. Heavy. |

[!WARNING] Changing Models & Data Compatibility

If you switch to a model with different dimensions (e.g., from e5_small to e5_multi), your existing database will be incompatible. You must delete the data directory (for the Docker setup above, docker volume rm mcp-data) and re-index your data.

Switching between models with the same dimensions (e.g., e5_multi <-> nomic) is theoretically possible but not recommended as semantic spaces differ.

🔮 Future Roadmap (Research & Ideas)

Based on analysis of advanced memory systems like Hindsight (see their documentation for details on these mechanisms), we are exploring these "Cognitive Architecture" features for future releases:

1. Meta-Cognitive Reflection (Consolidation)

  • Problem: Raw memories accumulate noise over time (e.g., 10 separate memories about fixing the same bug).
  • Solution: Implement a reflect background process (or tool) that periodically scans recent memories to:
    • De-duplicate redundant entries.
    • Resolve conflicts (if two memories contradict, keep the newer one or flag for review).
    • Synthesize low-level facts into high-level "Insights" (e.g., "User prefers Rust over Python" derived from 5 code choices).

2. Temporal Decay & "Presence"

  • Problem: Old memories can sometimes drown out current context in semantic search.
  • Solution: Integrate Time Decay into the Reciprocal Rank Fusion (RRF) algorithm.
    • Give a calculated boost to recent memories for queries implying "current state".
    • Allow the agent to prioritize "working memory" over "historical archives" dynamically.
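
One possible formulation (a sketch, not a committed design): weight each RRF contribution by an exponential decay over the memory's age, with a tunable half-life h:

score(d) = Σ_r  [ 1 / (k + rank_r(d)) ] × 2^(-age(d) / h)

A small h makes the server favor "working memory"; a large h approaches the current, purely rank-based behavior.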

3. Namespaced Memory Banks

  • Problem: Running one Docker container per project is resource-heavy.
  • Solution: Add support for namespace or project_id scoping.
    • Allows a single server instance to host isolated "Memory Banks" for different projects or agent personas.
    • Enables "Switching Context" without restarting the container.

4. Epistemic Confidence Scoring

  • Problem: The agent treats a guess the same as a verified fact.
  • Solution: Add a confidence score (0.0 - 1.0) to memory schemas.
    • Allows storing hypotheses ("I think the bug is in auth.rs", confidence: 0.3).
    • Retrieval tools can filter out low-confidence memories when answering factual questions.
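
A record under this proposed (not yet implemented) schema might look like:

{
  "content": "RESEARCH: The bug is probably in auth.rs — token expiry is not checked.",
  "confidence": 0.3
}

Retrieval could then accept a min_confidence filter (name hypothetical) so factual answers ignore loose hypotheses.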

License

MIT
