memory-mcp-1file

🏠 🍎 🪟 🐧 - A self-contained Memory server with single-binary architecture (embedded DB & models, no dependencies). Provides persistent semantic and graph-based memory for AI agents.

🧠 Memory MCP Server


A high-performance, pure Rust Model Context Protocol (MCP) server that provides persistent, semantic, and graph-based memory for AI agents.

Works perfectly with:

  • Claude Desktop
  • Claude Code (CLI)
  • Gemini CLI
  • Cursor
  • OpenCode
  • Cline / Roo Code
  • Any other MCP-compliant client

🏆 The "All-in-One" Advantage

Unlike other memory solutions that require a complex stack (Python + Vector DB + Graph DB), this project is a single, self-contained executable.

  • No External Database (SurrealDB is embedded)
  • No API Keys, No Cloud, No Python — Everything runs 100% locally via an embedded ONNX runtime. The embedding model is baked into the binary and runs on CPU. Nothing leaves your machine.
  • Zero Setup (Just run one Docker container or binary)

It combines:

  1. Vector Search (FastEmbed) for semantic similarity.
  2. Knowledge Graph (PetGraph) for entity relationships.
  3. Code Indexing with symbol graph (calls, extends, implements) for deep codebase understanding.
  4. Hybrid Retrieval (Reciprocal Rank Fusion) to merge their ranked results into a single list (sketched below).
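
To make the fusion step concrete, here is a minimal Rust sketch of Reciprocal Rank Fusion. The constant k = 60 and the example lists are illustrative assumptions, not the server's actual code:

```rust
use std::collections::HashMap;

/// Minimal RRF sketch: each ranked list contributes 1 / (k + rank + 1)
/// to an item's fused score; higher fused scores rank first.
fn rrf_fuse(ranked_lists: &[Vec<&str>], k: f64) -> Vec<(String, f64)> {
    let mut scores: HashMap<String, f64> = HashMap::new();
    for list in ranked_lists {
        for (rank, id) in list.iter().enumerate() {
            *scores.entry((*id).to_string()).or_insert(0.0) += 1.0 / (k + rank as f64 + 1.0);
        }
    }
    let mut fused: Vec<(String, f64)> = scores.into_iter().collect();
    fused.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    fused
}

fn main() {
    let vector_hits = vec!["mem_a", "mem_b", "mem_c"]; // semantic ranking
    let keyword_hits = vec!["mem_b", "mem_d"];         // keyword (BM25) ranking
    let graph_hits = vec!["mem_c", "mem_b"];           // graph ranking
    for (id, score) in rrf_fuse(&[vector_hits, keyword_hits, graph_hits], 60.0) {
        println!("{id}: {score:.4}");
    }
}
```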

🏗️ Architecture

```mermaid
graph TD
    User[AI Agent / IDE]

    subgraph "Memory MCP Server"
        MS[MCP Server]

        subgraph "Core Engines"
            ES[Embedding Service]
            GS[Graph Service]
            CS[Codebase Service]
        end

        MS -- "Store / Search" --> ES
        MS -- "Relate Entities" --> GS
        MS -- "Index" --> CS

        ES -- "Vectorize Text" --> SDB[(SurrealDB Embedded)]
        GS -- "Knowledge Graph" --> SDB
        CS -- "AST Chunks" --> SDB
    end

    User -- "MCP Protocol" --> MS
```

See the Detailed Architecture Documentation for a deeper dive.


🤖 Agent Integration (System Prompt)

Memory is useless if your agent doesn't check it. To get the "Long-Term Memory" effect, you must instruct your agent to follow a strict protocol.

We provide a battle-tested Memory Protocol (AGENTS.md) that you can adapt.

🛡️ Core Workflows (Context Protection)

The protocol implements specific flows to handle Context Window Compaction and Session Restarts:

  1. 🚀 Session Startup: The agent must search for TASK: in_progress immediately. This restores the full context of what was happening before the last session ended or the context was compacted.
  2. ⏳ Auto-Continue: A safety mechanism where the agent presents the found task to the user and waits (or auto-continues), ensuring it doesn't hallucinate a new task.
  3. 🔄 Triple Sync: Updates Memory, Todo List, and Files simultaneously. If one fails (e.g., context lost), the others serve as backups.
  4. 🧱 Prefix System: All memories use prefixes (TASK:, DECISION:, RESEARCH:) so semantic search can precisely target the right type of information, reducing noise (examples below).

These workflows turn the agent from a "stateless chatbot" into a "stateful worker" that survives restarts and context clearing.
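
For illustration, prefixed entries might look like this (hypothetical content):

```
TASK: in_progress - Migrate auth middleware to tokio (step 3/5)
DECISION: Keep SurrealDB in embedded mode; no remote connection
RESEARCH: PetGraph supports PageRank; evaluate for community detection
```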

Recommended System Prompt Snippet

Instead of scattering instructions across IDE-specific files (like .cursorrules), establish AGENTS.md as the Single Source of Truth.

Instruct your agent (in its base system prompt) to:

  1. Read AGENTS.md at the start of every session.
  2. Follow the protocols defined therein.

Here is a minimal reference prompt to bootstrap this behavior:

# 🧠 Memory & Protocol
You have access to a persistent memory server and a protocol definition file.

1.  **Protocol Adherence**:
    - READ `AGENTS.md` immediately upon starting.
    - Strictly follow the "Session Startup" and "Sync" protocols defined there.

2.  **Context Restoration**:
    - Run `search_memory("TASK: in_progress")` to restore context.
    - Do NOT ask the user "what should I do?" if a task is already in progress.

Why this matters

Without this protocol, the agent loses context after compaction or session restarts. With this protocol, it maintains the full context of the current task, ensuring no steps or details are lost, even when the chat history is cleared.


🔌 Client Configuration

Universal Docker Configuration (Any IDE/CLI)

To use this MCP server with any client (Claude Code, OpenCode, Cline, etc.), use the following Docker command structure.

Key Requirements:

  1. Memory Volume: -v mcp-data:/data (Persists your graph, embeddings, and cached model weights)
  2. Project Volume: -v $(pwd):/project:ro (Allows the server to read and index your code)
  3. Init Process: --init (Ensures the server shuts down cleanly)

[!TIP] One volume persists everything: The single -v mcp-data:/data mount covers both the SurrealDB database and the ~1.2 GB embedding model (stored under /data/models/). There is no need for a separate volume for /data/models — it is already a subdirectory of /data and is preserved automatically. Without a named volume, Docker creates a new anonymous volume on each docker run, causing the model to re-download (~1.2 GB) every time.
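
You can confirm the named volume exists and is being reused with standard Docker commands:

```
docker volume ls | grep mcp-data
docker volume inspect mcp-data
```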

JSON Configuration (Claude Desktop, etc.)

Add this to your configuration file (e.g., claude_desktop_config.json):

{
  "mcpServers": {
    "memory": {
      "command": "docker",
      "args": [
        "run",
        "--init",
        "-i",
        "--rm",
        "--memory=3g",
        "-v", "mcp-data:/data",
        "-v", "/absolute/path/to/your/project:/project:ro",
        "ghcr.io/pomazanbohdan/memory-mcp-1file:latest"
      ]
    }
  }
}

Note: Replace /absolute/path/to/your/project with the actual path you want to index. In some environments (like Cursor or VSCode extensions), you might be able to use variables like ${workspaceFolder}, but absolute paths are most reliable for Docker.

Cursor (Specific Instructions)

  1. Go to Cursor Settings > Features > MCP Servers.
  2. Click + Add New MCP Server.
  3. Type: stdio
  4. Name: memory
  5. Command:
    docker run --init -i --rm --memory=3g -v mcp-data:/data -v "/Users/yourname/projects/current:/project:ro" ghcr.io/pomazanbohdan/memory-mcp-1file:latest
    
    (Remember to update the project path when switching workspaces if you need code indexing)

OpenCode / CLI

docker run --init -i --rm --memory=3g \
  -v mcp-data:/data \
  -v $(pwd):/project:ro \
  ghcr.io/pomazanbohdan/memory-mcp-1file:latest

NPX / Bunx (No Docker required)

You can run the server directly via npx or bunx. The npm package automatically downloads the correct pre-compiled binary for your platform.

Claude Desktop

Add to claude_desktop_config.json:

{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "memory-mcp-1file"]
    }
  }
}

Claude Code (CLI)

claude mcp add memory -- npx -y memory-mcp-1file

Cursor

  1. Go to Cursor Settings > Features > MCP Servers.
  2. Click + Add New MCP Server.
  3. Type: command
  4. Name: memory
  5. Command: npx -y memory-mcp-1file

Or add to .cursor/mcp.json:

{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "memory-mcp-1file"]
    }
  }
}

Windsurf / VS Code

Add to your MCP settings:

{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "memory-mcp-1file"]
    }
  }
}

Bun

{
  "mcpServers": {
    "memory": {
      "command": "bunx",
      "args": ["memory-mcp-1file"]
    }
  }
}

Note: Unlike Docker, npx/bunx runs the binary locally — it already has access to your filesystem, so no directory mounting is needed. To customize the data storage path, pass --data-dir via args:

"args": ["-y", "memory-mcp-1file", "--", "--data-dir", "/path/to/data"]

Gemini CLI

Add to your ~/.gemini/settings.json:

{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "memory-mcp-1file"]
    }
  }
}

Or with Docker:

{
  "mcpServers": {
    "memory": {
      "command": "docker",
      "args": [
        "run", "--init", "-i", "--rm", "--memory=3g",
        "-v", "mcp-data:/data",
        "-v", "${workspaceFolder}:/project:ro",
        "ghcr.io/pomazanbohdan/memory-mcp-1file:latest"
      ]
    }
  }
}

✨ Key Features

  • Semantic Memory: Stores text with vector embeddings (qwen3 by default) for "vibe-based" retrieval.
  • Graph Memory: Tracks entities (User, Project, Tech) and their relations (uses, likes). Supports PageRank-based traversal (see the petgraph sketch after this list).
  • Code Intelligence: Indexes local project directories (AST-based chunking) for Rust, Python, TypeScript, JavaScript, Go, Java, and Dart/Flutter. Tracks calls, imports, extends, implements, and mixin relationships between symbols.
  • Temporal Validity: Memories can have valid_from and valid_until dates.
  • SurrealDB Backend: Fast, embedded, single-file database.
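
To make the Graph Memory idea concrete, here is a minimal petgraph sketch (add petgraph to Cargo.toml). The entities and relations are hypothetical examples, not the server's actual schema:

```rust
use petgraph::graph::DiGraph;
use petgraph::visit::EdgeRef;

fn main() {
    // Build a tiny entity/relation graph: User --works_on--> Project --uses--> Tech.
    let mut g: DiGraph<&str, &str> = DiGraph::new();
    let user = g.add_node("User");
    let project = g.add_node("Project:memory-mcp");
    let tech = g.add_node("Tech:Rust");
    g.add_edge(user, project, "works_on");
    g.add_edge(project, tech, "uses");
    g.add_edge(user, tech, "likes");

    // Walk the outgoing relations of `user`.
    for edge in g.edges(user) {
        println!("User --{}--> {}", edge.weight(), g[edge.target()]);
    }
}
```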

🛠️ Tools Available

The server exposes 18 tools to the AI model, organized into logical categories; an example invocation follows the tables.

🧠 Core Memory Management

| Tool | Description |
| --- | --- |
| store_memory | Store a new memory with content and optional metadata. |
| update_memory | Update memory fields. |
| delete_memory | Delete memory by ID. |
| list_memories | List memories (newest first). |
| get_memory | Get full memory by ID. |
| invalidate | Soft-delete memory, optionally linking replacement. |
| get_valid | Get valid memories. Optional timestamp (ISO 8601) for point-in-time query. |

🔎 Search & Retrieval

| Tool | Description |
| --- | --- |
| recall | Hybrid search (Vector + Keyword + Graph via RRF). Default for memories. |
| search_memory | Search memories. mode: vector (default) or bm25. |

🕸️ Knowledge Graph

| Tool | Description |
| --- | --- |
| knowledge_graph | Unified KG operations. action: create_entity \| create_relation \| get_related \| detect_communities. |

💻 Codebase Intelligence

| Tool | Description |
| --- | --- |
| index_project | Index codebase directory for code search. |
| delete_project | Delete indexed project. |
| recall_code | Code retrieval. mode: vector or hybrid (default). Hybrid uses vector+BM25+graph fusion. |
| search_symbols | Search code symbols by name. |
| symbol_graph | Navigate symbol graph. action: callers \| callees \| related. |
| project_info | Project info. action: list \| status \| stats. |

⚙️ System & Maintenance

| Tool | Description |
| --- | --- |
| get_status | Get system status and startup progress. |
| reset_all_memory | DANGER: Reset all database data (requires confirm=true). |
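
All of these are invoked through the standard MCP tools/call request. For example, a client calls store_memory roughly like this (argument names are illustrative; consult the schemas returned by tools/list):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "store_memory",
    "arguments": { "content": "DECISION: Use SurrealDB embedded mode" }
  }
}
```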

⚙️ Configuration

Environment variables or CLI args:

| Arg | Env | Default | Description |
| --- | --- | --- | --- |
| --data-dir | DATA_DIR | ./data | DB location |
| --model | EMBEDDING_MODEL | qwen3 | Embedding model (qwen3, gemma, bge_m3, nomic, e5_multi, e5_small) |
| --mrl-dim | MRL_DIM | (native) | Output dimension for MRL-supported models (e.g. 64, 128, 256, 512, 1024 for Qwen3). Defaults to the model's native maximum dimension (1024 for Qwen3). |
| --batch-size | BATCH_SIZE | 8 | Maximum batch size for embedding inference |
| --cache-size | CACHE_SIZE | 1000 | LRU cache capacity for embeddings |
| --timeout | TIMEOUT_MS | 30000 | Timeout in milliseconds |
| --idle-timeout | IDLE_TIMEOUT | 0 | Idle timeout in minutes. 0 = disabled |
| --log-level | LOG_LEVEL | info | Verbosity |
| (None) | HF_TOKEN | (None) | HuggingFace Token (ONLY required for gated models like gemma) |
| (None) | EMBEDDING_QUEUE_CAPACITY | 256 | Max size of the background embedding queue |
| (None) | EMBEDDING_BATCH_SIZE | 8 | How many files to process in one embedding chunk |
| (None) | INDEX_BATCH_SIZE | 20 | How many files to process in one incremental chunk |
| (None) | INDEX_DEBOUNCE_MS | 2000 | Milliseconds to wait before flushing index events (debounce) |
| (None) | MANIFEST_DIFF_INTERVAL_MINS | 10 | Minutes between periodic missing-file checks |
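
For example, a local (non-Docker) run combining several of these options might look like this (binary name as in the Gemma example below):

```
memory-mcp --model qwen3 --mrl-dim 512 --data-dir ./data --log-level debug
```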

🧠 Available Models

You can switch the embedding model using the --model arg or EMBEDDING_MODEL env var.

| Argument Value | HuggingFace Repo | Dimensions | Size | Use Case |
| --- | --- | --- | --- | --- |
| qwen3 | Qwen/Qwen3-Embedding-0.6B | 1024 (MRL) | 1.2 GB | Default. Top open-source model (2026), 32K context, MRL support. |
| gemma | onnx-community/embeddinggemma-300m-ONNX | 768 (MRL) | ~195 MB | Lighter alternative with MRL support. (Requires proprietary license agreement) |
| bge_m3 | BAAI/bge-m3 | 1024 | 2.3 GB | State-of-the-art multilingual hybrid retrieval. Heavy. |
| nomic | nomic-ai/nomic-embed-text-v1.5 | 768 | 1.9 GB | High-quality long-context, BERT-compatible. |
| e5_multi | intfloat/multilingual-e5-base | 768 | 1.1 GB | Legacy; kept for backward compatibility. |
| e5_small | intfloat/multilingual-e5-small | 384 | 134 MB | Fastest, minimal RAM. Good for dev/testing. |

📉 Matryoshka Representation Learning (MRL)

Models marked with (MRL) support dynamically truncating the output embedding vector to a smaller dimension (e.g., 512, 256, 128) with minimal loss of accuracy. This saves database storage and speeds up vector search.

Use the --mrl-dim argument to specify the desired size. If omitted, the default is the model's native base dimension (e.g., 1024 for Qwen3).
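
Conceptually, MRL truncation just keeps a prefix of the embedding vector and re-normalizes it. A minimal sketch in Rust (assuming plain f32 vectors; not the server's internal code):

```rust
/// Keep the first `dim` components and re-normalize, so cosine
/// similarity stays meaningful after truncation (MRL-style).
fn truncate_mrl(embedding: &[f32], dim: usize) -> Vec<f32> {
    let mut v = embedding[..dim.min(embedding.len())].to_vec();
    let norm = v.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm > 0.0 {
        v.iter_mut().for_each(|x| *x /= norm);
    }
    v
}
```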

Warning: Once your database is created with a specific dimension, you cannot change it without wiping the data directory.

🔒 Gated Models & Authentication (Gemma)

By default, the server uses Qwen3, which is fully open-source and downloads automatically without any authentication.

However, if you choose to use Gemma (--model gemma), you must authenticate because it is a "Gated Model" with a proprietary license.

To use Gemma:

  1. Go to google/embeddinggemma-300m on Hugging Face.
  2. Log in and click "Agree to access repository".
  3. Generate an Access Token at HF Tokens (Read access is enough).
  4. Start the server with the token:
# Using environment variable
HF_TOKEN="hf_your_token_here" memory-mcp --model gemma

# Or via .env file (see .env.example)

[!WARNING] Changing Models & Data Compatibility

If you switch to a model with different dimensions (e.g., from e5_small to e5_multi), your existing database will be incompatible. You must delete the data directory (volume) and re-index your data.

Switching between models with the same dimensions (e.g., e5_multi <-> nomic) is theoretically possible but not recommended as semantic spaces differ.
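
To wipe the data before switching, remove the Docker volume or the local data directory (names assume the defaults used in this README):

```
# Docker: removes the database AND the cached model weights
docker volume rm mcp-data

# Local (npx / binary): the default data dir, unless overridden via --data-dir
rm -rf ./data
```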

🔮 Future Roadmap (Research & Ideas)

Based on analysis of advanced memory systems like Hindsight (see their documentation for details on these mechanisms), we are exploring these "Cognitive Architecture" features for future releases:

1. Meta-Cognitive Reflection (Consolidation)

  • Problem: Raw memories accumulate noise over time (e.g., 10 separate memories about fixing the same bug).
  • Solution: Implement a reflect background process (or tool) that periodically scans recent memories to:
    • De-duplicate redundant entries.
    • Resolve conflicts (if two memories contradict, keep the newer one or flag for review).
    • Synthesize low-level facts into high-level "Insights" (e.g., "User prefers Rust over Python" derived from 5 code choices).

2. Temporal Decay & "Presence"

  • Problem: Old memories can sometimes drown out current context in semantic search.
  • Solution: Integrate Time Decay into the Reciprocal Rank Fusion (RRF) algorithm (one possible scoring shape is sketched after this list).
    • Give a calculated boost to recent memories for queries implying "current state".
    • Allow the agent to prioritize "working memory" over "historical archives" dynamically.
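
One possible shape of a time-decayed RRF contribution (purely a roadmap sketch; the decay constant and age unit are assumptions):

```rust
/// Roadmap sketch: damp the standard RRF term 1 / (k + rank + 1)
/// by an exponential decay on the memory's age in days.
fn decayed_rrf_score(rank: usize, age_days: f64, k: f64, lambda: f64) -> f64 {
    (1.0 / (k + rank as f64 + 1.0)) * (-lambda * age_days).exp()
}
```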

3. Namespaced Memory Banks

  • Problem: Running one docker container per project is resource-heavy.
  • Solution: Add support for namespace or project_id scoping.
    • Allows a single server instance to host isolated "Memory Banks" for different projects or agent personas.
    • Enables "Switching Context" without restarting the container.

4. Epistemic Confidence Scoring

  • Problem: The agent treats a guess the same as a verified fact.
  • Solution: Add a confidence score (0.0 - 1.0) to memory schemas (illustrated below).
    • Allows storing hypotheses ("I think the bug is in auth.rs", confidence: 0.3).
    • Retrieval tools can filter out low-confidence memories when answering factual questions.
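
An illustrative (hypothetical) schema extension and filter:

```rust
/// Roadmap sketch: a memory record carrying an epistemic confidence score.
struct Memory {
    content: String,
    /// 0.0 = pure hypothesis, 1.0 = verified fact.
    confidence: f32,
}

/// Keep only memories trustworthy enough for factual answers.
fn factual_only(memories: &[Memory], threshold: f32) -> Vec<&Memory> {
    memories.iter().filter(|m| m.confidence >= threshold).collect()
}
```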

License

MIT
