SuperLocalMemory V2

Universal, local-first persistent memory for AI assistants. SQLite-based knowledge graph with zero cloud dependencies. Works with 17+ tools (Claude, Cursor, Windsurf, VS Code, etc.). 100% free forever.


NEW: v2.5 — "Your AI Memory Has a Heartbeat"

SuperLocalMemory is no longer passive storage — it's a real-time coordination layer.

| What's New | Why It Matters |
|---|---|
| Real-Time Event Stream | See every memory operation live in the dashboard — no refresh needed. SSE-powered, cross-process. |
| No More "Database Locked" | WAL mode + serialized write queue. 50 concurrent agents writing? Zero errors. |
| Agent Tracking | Know exactly which AI tool wrote what. Claude, Cursor, Windsurf, CLI — all tracked automatically. |
| Trust Scoring | Bayesian trust signals detect spam and quick-deletes, and factor in cross-agent validation. Silent in v2.5, enforced in v2.6. |
| Memory Provenance | Every memory records who created it, via which protocol, with full derivation lineage. |
| Production-Grade Code | 28 API endpoints across 8 modular route files. 13 modular JS files. 63 pytest tests. |

Upgrade: npm install -g superlocalmemory@latest

Dashboard: python3 ~/.claude-memory/ui_server.py then open http://localhost:8765

Interactive Architecture Diagram | Architecture Doc | Full Changelog


NEW: Framework Integrations (v2.5.1)

Use SuperLocalMemory as a memory backend in your LangChain and LlamaIndex applications — 100% local, zero cloud.

LangChain

pip install langchain-superlocalmemory

from langchain_superlocalmemory import SuperLocalMemoryChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

history = SuperLocalMemoryChatMessageHistory(session_id="my-session")
# Messages persist across sessions, stored locally in ~/.claude-memory/memory.db

LlamaIndex

pip install llama-index-storage-chat-store-superlocalmemory

from llama_index.storage.chat_store.superlocalmemory import SuperLocalMemoryChatStore
from llama_index.core.memory import ChatMemoryBuffer

chat_store = SuperLocalMemoryChatStore()
memory = ChatMemoryBuffer.from_defaults(chat_store=chat_store, chat_store_key="user-1")

LangChain Guide | LlamaIndex Guide


Install in One Command

npm install -g superlocalmemory

Or clone manually:

git clone https://github.com/varun369/SuperLocalMemoryV2.git && cd SuperLocalMemoryV2 && ./install.sh

Both methods auto-detect and configure 16+ IDEs and AI tools — Cursor, VS Code/Copilot, Codex, Claude, Windsurf, Gemini CLI, JetBrains, and more.


The Problem

Every time you start a new Claude session:

You: "Remember that authentication bug we fixed last week?"
Claude: "I don't have access to previous conversations..."
You: *sighs and explains everything again*

AI assistants forget everything between sessions. You waste time re-explaining your:

  • Project architecture
  • Coding preferences
  • Previous decisions
  • Debugging history

The Solution

# Install in one command
npm install -g superlocalmemory

# Save a memory
superlocalmemoryv2:remember "Fixed auth bug - JWT tokens were expiring too fast, increased to 24h"

# Later, in a new session...
superlocalmemoryv2:recall "auth bug"
# ✓ Found: "Fixed auth bug - JWT tokens were expiring too fast, increased to 24h"

Your AI now remembers everything. Forever. Locally. For free.


🚀 Quick Start

npm (Recommended — All Platforms)

npm install -g superlocalmemory

Mac/Linux (Manual)

git clone https://github.com/varun369/SuperLocalMemoryV2.git
cd SuperLocalMemoryV2
./install.sh

Windows (PowerShell)

git clone https://github.com/varun369/SuperLocalMemoryV2.git
cd SuperLocalMemoryV2
.\install.ps1

Verify Installation

superlocalmemoryv2:status
# ✓ Database: OK (0 memories)
# ✓ Graph: Ready
# ✓ Patterns: Ready

That's it. No Docker. No API keys. No cloud accounts. No configuration.

Updating to Latest Version

npm users:

# Update to latest version
npm update -g superlocalmemory

# Or force the latest release
npm install -g superlocalmemory@latest

# Install a specific version
npm install -g superlocalmemory@<version>

Manual install users:

cd SuperLocalMemoryV2
git pull origin main
./install.sh  # Mac/Linux
# or
.\install.ps1  # Windows

Your data is safe: Updates preserve your database and all memories.

Start the Visualization Dashboard

# Launch the interactive web UI
python3 ~/.claude-memory/ui_server.py

# Opens at http://localhost:8765
# Features: Timeline view, search explorer, graph visualization

🎨 Visualization Dashboard

NEW in v2.2.0: Interactive web-based dashboard for exploring your memories visually.

Features

| Feature | Description |
|---|---|
| 📈 Timeline View | See your memories chronologically with importance indicators |
| 🔍 Search Explorer | Real-time semantic search with score visualization |
| 🕸️ Graph Visualization | Interactive knowledge graph with clusters and relationships |
| 📊 Statistics Dashboard | Memory trends, tag clouds, pattern insights |
| 🎯 Advanced Filters | Filter by tags, importance, date range, clusters |

Quick Tour

# 1. Start dashboard
python ~/.claude-memory/ui_server.py

# 2. Navigate to http://localhost:8765

# 3. Explore your memories:
#    - Timeline: See memories over time
#    - Search: Find with semantic scoring
#    - Graph: Visualize relationships
#    - Stats: Analyze patterns

[[Complete Dashboard Guide →|Visualization-Dashboard]]


New in v2.4.1: Hierarchical Clustering, Community Summaries & Auto-Backup

| Feature | Description |
|---|---|
| Hierarchical Leiden | Recursive community detection — clusters within clusters up to 3 levels. "Python" → "FastAPI" → "Auth patterns" |
| Community Summaries | TF-IDF structured reports per cluster: key topics, projects, categories at a glance |
| MACLA Confidence | Bayesian Beta-Binomial scoring (arXiv:2512.18950) — calibrated confidence, not raw frequency |
| Auto-Backup | Configurable SQLite backups with retention policies, one-click restore from dashboard |
| Profile UI | Create, switch, delete profiles from the web dashboard — full isolation per context |
| Profile Isolation | All API endpoints (graph, clusters, patterns, timeline) scoped to active profile |

🔍 Advanced Search

SuperLocalMemory V2.2.0 implements hybrid search combining multiple strategies for maximum accuracy.

Search Strategies

| Strategy | Method | Best For |
|---|---|---|
| Semantic Search | TF-IDF vectors + cosine similarity | Conceptual queries ("authentication patterns") |
| Full-Text Search | SQLite FTS5 with ranking | Exact phrases ("JWT tokens expire") |
| Graph-Enhanced | Knowledge graph traversal | Related concepts ("show auth-related") |
| Hybrid Mode | All three combined | General queries (default) |
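As intuition for the semantic strategy, here is a minimal pure-Python TF-IDF + cosine sketch. It is illustrative only — the project's actual search engine will differ in tokenization, weighting, and scale:

```python
import math
from collections import Counter

def tfidf(docs):
    """TF-IDF vectors as {term: weight} dicts (IDF is log(n/df)+1 so shared terms keep a small weight)."""
    tokens = [d.lower().split() for d in docs]
    n = len(docs)
    df = Counter(t for toks in tokens for t in set(toks))
    return [{t: c * (math.log(n / df[t]) + 1.0) for t, c in Counter(toks).items()}
            for toks in tokens]

def cosine(a, b):
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

memories = [
    "fixed auth bug JWT tokens were expiring too fast",
    "react component renders twice in strict mode",
    "oauth flow needs refresh token rotation",
]
# Vectorize the query together with the corpus so IDF covers all terms
vecs = tfidf(memories + ["jwt token auth"])
query, docs = vecs[-1], vecs[:-1]
ranked = sorted(range(len(docs)), key=lambda i: cosine(query, docs[i]), reverse=True)
print(ranked[0])  # → 0 (the JWT auth memory ranks first)
```

Because matching happens on weighted term overlap rather than exact phrases, "jwt token auth" surfaces the auth-bug memory even though the wording differs.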

Search Examples

# Semantic: finds conceptually similar
slm recall "security best practices"
# Matches: "JWT implementation", "OAuth flow", "CSRF protection"

# Exact: finds literal text
slm recall "PostgreSQL 15"
# Matches: exactly "PostgreSQL 15"

# Graph: finds related via clusters
slm recall "authentication" --use-graph
# Matches: JWT, OAuth, sessions (via "Auth & Security" cluster)

# Hybrid: best of all worlds (default)
slm recall "API design patterns"
# Combines semantic + exact + graph for optimal results
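The exact-match strategy rides on SQLite's FTS5 extension. A self-contained sketch — the table name and schema here are hypothetical, not the project's real layout:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# FTS5 virtual table with one indexed text column
conn.execute("CREATE VIRTUAL TABLE memories USING fts5(content)")
conn.executemany(
    "INSERT INTO memories(content) VALUES (?)",
    [("Fixed auth bug - JWT tokens were expiring too fast",),
     ("Migrated database to PostgreSQL 15",),
     ("Refactored React components for strict mode",)],
)
# MATCH uses the full-text index; quoting makes it a phrase query.
# bm25() returns a relevance score (lower = better match).
rows = conn.execute(
    "SELECT content FROM memories WHERE memories MATCH ? ORDER BY bm25(memories)",
    ('"PostgreSQL 15"',),
).fetchall()
print(rows[0][0])  # → Migrated database to PostgreSQL 15
```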

Measured Search Latency

| Database Size | Median | P95 | P99 |
|---|---|---|---|
| 100 memories | 10.6ms | 14.9ms | 15.8ms |
| 500 memories | 65.2ms | 101.7ms | 112.5ms |
| 1,000 memories | 124.3ms | 190.1ms | 219.5ms |

For typical personal databases (under 500 memories), search returns faster than you blink. Full benchmarks →


⚡ Measured Performance

All numbers measured on real hardware (Apple M4 Pro, 24GB RAM). No estimates — real benchmarks.

Search Speed

| Database Size | Median Latency | P95 Latency |
|---|---|---|
| 100 memories | 10.6ms | 14.9ms |
| 500 memories | 65.2ms | 101.7ms |
| 1,000 memories | 124.3ms | 190.1ms |

For typical personal use (under 500 memories), search results return faster than you blink.

Concurrent Writes — Zero Errors

| Scenario | Writes/sec | Errors |
|---|---|---|
| 1 AI tool writing | 204/sec | 0 |
| 2 AI tools simultaneously | 220/sec | 0 |
| 5 AI tools simultaneously | 130/sec | 0 |
| 10 AI tools simultaneously | 25/sec | 0 |

WAL mode + serialized write queue = zero "database is locked" errors, ever.
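Both halves of that formula are standard SQLite practice. A minimal sketch of the pattern — WAL journaling plus funneling every write through one writer thread — not the project's actual code:

```python
import os
import queue
import sqlite3
import tempfile
import threading

class SerializedWriter:
    """All writes pass through one thread, so concurrent callers never contend for SQLite's write lock."""

    def __init__(self, path):
        self.q = queue.Queue()
        self.t = threading.Thread(target=self._run, args=(path,), daemon=True)
        self.t.start()

    def _run(self, path):
        conn = sqlite3.connect(path)
        conn.execute("PRAGMA journal_mode=WAL")  # readers don't block the writer
        conn.execute("CREATE TABLE IF NOT EXISTS memories(content TEXT)")
        while True:
            item = self.q.get()
            if item is None:  # sentinel: shut down cleanly
                break
            conn.execute("INSERT INTO memories(content) VALUES (?)", (item,))
            conn.commit()
        conn.close()

    def write(self, content):
        self.q.put(content)  # non-blocking for the caller

    def close(self):
        self.q.put(None)
        self.t.join()

path = os.path.join(tempfile.mkdtemp(), "memory.db")
w = SerializedWriter(path)
# 10 "agents" writing concurrently — every write is queued, none hits a lock error
threads = [threading.Thread(target=w.write, args=(f"memory {i}",)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
w.close()
count = sqlite3.connect(path).execute("SELECT COUNT(*) FROM memories").fetchone()[0]
print(count)  # → 10
```

The queue serializes writes without making callers wait on the database itself, which is why throughput stays high as agents pile on.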

Storage

10,000 memories = 13.6 MB on disk (~1.4 KB per memory). Your entire AI memory history takes less space than a photo.

Trust Defense

Bayesian trust scoring achieves perfect separation (trust gap = 1.0) between honest and malicious agents. Detects "sleeper" attacks with 74.7% trust drop. Zero false positives.

Graph Construction

| Memories | Build Time |
|---|---|
| 100 | 0.28s |
| 1,000 | 10.6s |

Leiden clustering discovers 6-7 natural topic communities automatically.

LoCoMo benchmark results coming soon — evaluation against the standardized LoCoMo long-conversation memory benchmark (Snap Research, ACL 2024).

Full benchmark details →


🌐 Works Everywhere

SuperLocalMemory V2 is the ONLY memory system that works across ALL your tools:

Supported IDEs & Tools

| Tool | Integration | How It Works |
|---|---|---|
| Claude Code | ✅ Skills + MCP | /superlocalmemoryv2:remember |
| Cursor | ✅ MCP + Skills | AI uses memory tools natively |
| Windsurf | ✅ MCP + Skills | Native memory access |
| Claude Desktop | ✅ MCP | Built-in support |
| OpenAI Codex | ✅ MCP + Skills | Auto-configured (TOML) |
| VS Code / Copilot | ✅ MCP + Skills | .vscode/mcp.json |
| Continue.dev | ✅ MCP + Skills | /slm-remember |
| Cody | ✅ Custom Commands | /slm-remember |
| Gemini CLI | ✅ MCP + Skills | Native MCP + skills |
| JetBrains IDEs | ✅ MCP | Via AI Assistant settings |
| Zed Editor | ✅ MCP | Native MCP tools |
| OpenCode | ✅ MCP | Native MCP tools |
| Perplexity | ✅ MCP | Native MCP tools |
| Antigravity | ✅ MCP + Skills | Native MCP tools |
| ChatGPT | ✅ MCP Connector | search() + fetch() via HTTP tunnel |
| Aider | ✅ Smart Wrapper | aider-smart with context |
| Any Terminal | ✅ Universal CLI | slm remember "content" |

Three Ways to Access

  1. MCP (Model Context Protocol) - Auto-configured for Cursor, Windsurf, Claude Desktop

    • AI assistants get natural access to your memory
    • No manual commands needed
    • "Remember that we use FastAPI" just works
  2. Skills & Commands - For Claude Code, Continue.dev, Cody

    • /superlocalmemoryv2:remember in Claude Code
    • /slm-remember in Continue.dev and Cody
    • Familiar slash command interface
  3. Universal CLI - Works in any terminal or script

    • slm remember "content" - Simple, clean syntax
    • slm recall "query" - Search from anywhere
    • aider-smart - Aider with auto-context injection

All three methods use the SAME local database. No data duplication, no conflicts.

Auto-Detection

Installation automatically detects and configures:

  • Existing IDEs (Cursor, Windsurf, VS Code)
  • Installed tools (Aider, Continue, Cody)
  • Shell environment (bash, zsh)

Zero manual configuration required. It just works.

Manual Setup for Other Apps

Want to use SuperLocalMemory in ChatGPT, Perplexity, Zed, or other MCP-compatible tools?

📘 Complete setup guide: docs/MCP-MANUAL-SETUP.md

Covers:

  • ChatGPT Desktop - Add via Settings → MCP
  • Perplexity - Configure via app settings
  • Zed Editor - JSON configuration
  • Cody - VS Code/JetBrains setup
  • Custom MCP clients - Python/HTTP integration

All tools connect to the same local database - no data duplication.


💡 Why SuperLocalMemory?

For Developers Who Use AI Daily

| Scenario | Without Memory | With SuperLocalMemory |
|---|---|---|
| New Claude session | Re-explain entire project | recall "project context" → instant context |
| Debugging | "We tried X last week..." starts over | Knowledge graph shows related past fixes |
| Code preferences | "I prefer React..." every time | Pattern learning knows your style |
| Multi-project | Context constantly bleeds | Separate profiles per project |

Built on 2026 Research

Not another simple key-value store. SuperLocalMemory implements cutting-edge memory architecture:

  • PageIndex (Meta AI) → Hierarchical memory organization
  • GraphRAG (Microsoft) → Knowledge graph with auto-clustering
  • xMemory (Stanford) → Identity pattern learning
  • A-RAG → Multi-level retrieval with context awareness

The only open-source implementation combining all four approaches.


🆚 vs Alternatives

The Hard Truth About "Free" Tiers

| Solution | Free Tier Limits | Paid Price | What's Missing |
|---|---|---|---|
| Mem0 | 10K memories, limited API | Usage-based | No pattern learning, not local |
| Zep | Limited credits | $50/month | Credit system, cloud-only |
| Supermemory | 1M tokens, 10K queries | $19–399/mo | Not local, no graphs |
| Personal.AI | ❌ No free tier | $33/month | Cloud-only, closed ecosystem |
| Letta/MemGPT | Self-hosted (complex) | TBD | Requires significant setup |
| SuperLocalMemory V2 | Unlimited | $0 forever | Nothing. |

Feature Comparison (What Actually Matters)

| Feature | SuperLocalMemory V2 | Mem0 / Zep / Khoj / Letta |
|---|---|---|
| Works in Cursor | ✅ Local | Cloud-only |
| Works in Windsurf | ✅ Local | Cloud-only |
| Works in VS Code | ✅ Native | 3rd-party / partial |
| Works in Claude | ✅ | — |
| Works with Aider | ✅ | — |
| Universal CLI | ✅ | — |
| 7-layer universal architecture | ✅ | — |
| Pattern learning | ✅ | — |
| Multi-profile support | ✅ | Partial |
| Knowledge graphs | ✅ | — |
| 100% local | ✅ | Partial |
| Zero setup | ✅ | — |
| Progressive compression | ✅ | — |
| Completely free | ✅ | Limited / partial |

SuperLocalMemory V2 is the ONLY solution that:

  • ✅ Works across 16+ IDEs and CLI tools
  • ✅ Remains 100% local (no cloud dependencies)
  • ✅ Completely free with unlimited memories

See full competitive analysis →


✨ Features

Multi-Layer Memory Architecture

View Interactive Architecture Diagram — Click any layer for details, research references, and file paths.

┌─────────────────────────────────────────────────────────────┐
│  Layer 9: VISUALIZATION (NEW v2.2.0)                        │
│  Interactive dashboard: timeline, search, graph explorer    │
│  Real-time analytics and visual insights                    │
├─────────────────────────────────────────────────────────────┤
│  Layer 8: HYBRID SEARCH (NEW v2.2.0)                        │
│  Combines: Semantic + FTS5 + Graph traversal                │
│  80ms response time with maximum accuracy                   │
├─────────────────────────────────────────────────────────────┤
│  Layer 7: UNIVERSAL ACCESS                                  │
│  MCP + Skills + CLI (works everywhere)                      │
│  16+ IDEs with single database                              │
├─────────────────────────────────────────────────────────────┤
│  Layer 6: MCP INTEGRATION                                   │
│  Model Context Protocol: 6 tools, 4 resources, 2 prompts    │
│  Auto-configured for Cursor, Windsurf, Claude               │
├─────────────────────────────────────────────────────────────┤
│  Layer 5: SKILLS LAYER                                      │
│  6 universal slash-commands for AI assistants               │
│  Compatible with Claude Code, Continue, Cody                │
├─────────────────────────────────────────────────────────────┤
│  Layer 4: PATTERN LEARNING + MACLA (v2.4.0)                  │
│  Bayesian Beta-Binomial confidence (arXiv:2512.18950)       │
│  "You prefer React over Vue" (73% confidence)               │
├─────────────────────────────────────────────────────────────┤
│  Layer 3: KNOWLEDGE GRAPH + HIERARCHICAL LEIDEN (v2.4.1)    │
│  Recursive clustering: "Python" → "FastAPI" → "Auth"        │
│  Community summaries + TF-IDF structured reports            │
├─────────────────────────────────────────────────────────────┤
│  Layer 2: HIERARCHICAL INDEX                                │
│  Tree structure for fast navigation                         │
│  O(log n) lookups instead of O(n) scans                     │
├─────────────────────────────────────────────────────────────┤
│  Layer 1: RAW STORAGE                                       │
│  SQLite + Full-text search + TF-IDF vectors                 │
│  Compression: 60-96% space savings                          │
└─────────────────────────────────────────────────────────────┘
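Layer 1's 60–96% savings come from compressing memory text before storage. The project's actual codec is its own; zlib here is just a stand-in to show why repetitive note text compresses so well:

```python
import zlib

# Notes about the same project repeat the same vocabulary heavily
text = ("Fixed auth bug - JWT tokens were expiring too fast, increased to 24h. " * 40).encode()
packed = zlib.compress(text, level=9)
ratio = 1 - len(packed) / len(text)
print(f"{ratio:.0%} saved")          # typically well above 90% for text this repetitive
assert zlib.decompress(packed) == text  # lossless round-trip
```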

Knowledge Graph (It's Magic)

# Build the graph from your memories
python ~/.claude-memory/graph_engine.py build

# Output:
# ✓ Processed 47 memories
# ✓ Created 12 clusters:
#   - "Authentication & Tokens" (8 memories)
#   - "Performance Optimization" (6 memories)
#   - "React Components" (11 memories)
#   - "Database Queries" (5 memories)
#   ...

The graph automatically discovers relationships. Ask "what relates to auth?" and get JWT, session management, token refresh—even if you never tagged them together.
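A toy version of that graph-enhanced recall: memories linked when they share a tag, related items found by bounded traversal. This is illustrative only — the real engine builds its graph from content, not just tags:

```python
from collections import defaultdict, deque

# Hypothetical memory IDs with tag sets
memories = {
    1: {"jwt", "auth"},
    2: {"auth", "sessions"},
    3: {"sessions", "redis"},
    4: {"react", "ui"},
}

# Adjacency list: two memories are linked if their tag sets overlap
graph = defaultdict(set)
ids = list(memories)
for i in ids:
    for j in ids:
        if i < j and memories[i] & memories[j]:
            graph[i].add(j)
            graph[j].add(i)

def related(start, max_hops=2):
    """BFS out to max_hops — surfaces transitively related memories."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return seen - {start}

print(sorted(related(1)))  # → [2, 3]: redis reached via "sessions", react stays unrelated
```

Memory 3 never shares a tag with memory 1, yet traversal connects them through memory 2 — the same mechanism that links JWT to session management without explicit tagging.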

Pattern Learning (It Knows You)

# Learn patterns from your memories
python ~/.claude-memory/pattern_learner.py update

# Get your coding identity
python ~/.claude-memory/pattern_learner.py context 0.5

# Output:
# Your Coding Identity:
# - Framework preference: React (73% confidence)
# - Style: Performance over readability (58% confidence)
# - Testing: Jest + React Testing Library (65% confidence)
# - API style: REST over GraphQL (81% confidence)

Your AI assistant can now match your preferences automatically.

MACLA Confidence Scoring (v2.4.0): Confidence uses a Bayesian Beta-Binomial posterior (Forouzandeh et al., arXiv:2512.18950). Pattern-specific priors, log-scaled competition, recency bonus. Range: 0.0–0.95 (hard cap prevents overconfidence).
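The Beta-Binomial core of that scoring can be sketched in a few lines. The real MACLA scorer adds pattern-specific priors, log-scaled competition, and a recency bonus; treat the prior values here as placeholders:

```python
def confidence(successes, trials, alpha=1.0, beta=1.0, cap=0.95):
    """Posterior mean of a Beta(alpha, beta) prior updated with Binomial evidence.
    Few observations -> pulled toward the prior; many -> approaches the observed rate.
    The hard cap prevents overconfidence no matter how much evidence accumulates."""
    posterior_mean = (alpha + successes) / (alpha + beta + trials)
    return min(posterior_mean, cap)

# 3 of 4 observations prefer React over Vue: still modest confidence
print(round(confidence(3, 4), 2))    # → 0.67
# 30 of 40: same observed rate, but stronger evidence pushes confidence up
print(round(confidence(30, 40), 2))  # → 0.74
# Even 100/100 is capped at 0.95
print(confidence(100, 100))          # → 0.95
```

This is why the scoring is "calibrated confidence, not raw frequency": 3/4 and 30/40 are the same ratio but earn different scores.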

Multi-Profile Support

# Work profile
superlocalmemoryv2:profile create work --description "Day job"
superlocalmemoryv2:profile switch work

# Personal projects
superlocalmemoryv2:profile create personal
superlocalmemoryv2:profile switch personal

# Client projects (completely isolated)
superlocalmemoryv2:profile create client-acme

Each profile has isolated memories, graphs, and patterns. No context bleeding.
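Mechanically, isolation of this kind comes down to routing each profile to its own database file. A hypothetical sketch — the actual on-disk layout under ~/.claude-memory is the project's own:

```python
import os
import sqlite3
import tempfile

BASE = tempfile.mkdtemp()  # stand-in for the real data directory

def profile_db(name):
    """One SQLite file per profile: memories, graphs, and patterns cannot bleed across."""
    path = os.path.join(BASE, "profiles", name, "memory.db")
    os.makedirs(os.path.dirname(path), exist_ok=True)
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE IF NOT EXISTS memories(content TEXT)")
    return conn

work = profile_db("work")
work.execute("INSERT INTO memories VALUES ('standup is at 9:30')")
work.commit()

personal = profile_db("personal")
rows = personal.execute("SELECT * FROM memories").fetchall()
print(rows)  # → []: the personal profile sees nothing from work
```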


📖 Documentation

| Guide | Description |
|---|---|
| Quick Start | Get running in 5 minutes |
| Installation | Detailed setup instructions |
| Visualization Dashboard | Interactive web UI guide (NEW v2.2.0) |
| CLI Reference | All commands explained |
| Knowledge Graph | How clustering works |
| Pattern Learning | Identity extraction |
| Profiles Guide | Multi-context management |
| API Reference | Python API documentation |

🔧 CLI Commands

# Memory Operations
superlocalmemoryv2:remember "content" --tags tag1,tag2  # Save memory
superlocalmemoryv2:recall "search query"                 # Search
superlocalmemoryv2:list                                  # Recent memories
superlocalmemoryv2:status                                # System health

# Profile Management
superlocalmemoryv2:profile list                          # Show all profiles
superlocalmemoryv2:profile create <name>                 # New profile
superlocalmemoryv2:profile switch <name>                 # Switch context

# Knowledge Graph
python ~/.claude-memory/graph_engine.py build            # Build graph (+ hierarchical + summaries)
python ~/.claude-memory/graph_engine.py stats            # View clusters
python ~/.claude-memory/graph_engine.py related --id 5   # Find related
python ~/.claude-memory/graph_engine.py hierarchical     # Sub-cluster large communities
python ~/.claude-memory/graph_engine.py summaries        # Generate cluster summaries

# Pattern Learning
python ~/.claude-memory/pattern_learner.py update        # Learn patterns
python ~/.claude-memory/pattern_learner.py context 0.5   # Get identity

# Auto-Backup (v2.4.0)
python ~/.claude-memory/auto_backup.py backup            # Manual backup
python ~/.claude-memory/auto_backup.py list              # List backups
python ~/.claude-memory/auto_backup.py status            # Backup status

# Reset (Use with caution!)
superlocalmemoryv2:reset soft                            # Clear memories
superlocalmemoryv2:reset hard --confirm                  # Nuclear option

📊 Performance at a Glance

| Metric | Measured Result |
|---|---|
| Search latency | 10.6ms median (100 memories) |
| Concurrent writes | 220/sec with 2 agents, zero errors |
| Storage | ~1.4 KB per memory at scale (13.6 MB for 10K) |
| Trust defense | 1.0 trust gap (perfect separation) |
| Graph build | 0.28s for 100 memories |
| Search quality | MRR 0.90 (first result correct 9/10 times) |

Full benchmark details →


🤝 Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

Areas for contribution:

  • Additional pattern categories
  • Graph visualization UI
  • Integration with more AI assistants
  • Performance optimizations
  • Documentation improvements

💖 Support This Project

If SuperLocalMemory saves you time, consider supporting its development:


📜 License

MIT License — use freely, even commercially. Just include the license.


👨‍💻 Author

Varun Pratap Bhardwaj — Solution Architect

GitHub

Building tools that make AI actually useful for developers.


Related Servers