SuperLocalMemory V2

Universal, local-first persistent memory for AI assistants. SQLite-based knowledge graph with zero cloud dependencies. Works with 17+ tools (Claude, Cursor, Windsurf, VS Code, etc.). 100% free forever.


Research Paper

SuperLocalMemory: Privacy-Preserving Multi-Agent Memory with Bayesian Trust Defense Against Memory Poisoning

Varun Pratap Bhardwaj, 2026

The paper presents SuperLocalMemory's architecture for defending against OWASP ASI06 memory poisoning through local-first design, Bayesian trust scoring, and adaptive learning-to-rank — all without cloud dependencies or LLM inference calls.

| Platform | Link |
|---|---|
| arXiv | arXiv:2603.02240 |
| Zenodo (CERN) | DOI: 10.5281/zenodo.18709670 |
| ResearchGate | Publication Page |
| Research Portfolio | superlocalmemory.com/research |

If you use SuperLocalMemory in your research, please cite:

```bibtex
@article{bhardwaj2026superlocalmemory,
  title={SuperLocalMemory: Privacy-Preserving Multi-Agent Memory with Bayesian Trust Defense Against Memory Poisoning},
  author={Bhardwaj, Varun Pratap},
  year={2026},
  eprint={2603.02240},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2603.02240}
}
```

What's New in v2.8 — "Memory That Manages Itself"

SuperLocalMemory now manages its own memory lifecycle, learns from action outcomes, and provides enterprise-grade compliance — all 100% locally on your machine.

Memory Lifecycle Management (v2.8)

Memories automatically transition through lifecycle states based on usage patterns:

  • Active — Frequently used, instantly available
  • Warm — Recently used, included in searches
  • Cold — Older, retrievable on demand
  • Archived — Compressed, restorable when needed

Configure bounds to keep your memory system fast:

```bash
# Check lifecycle status
slm lifecycle-status

# Compact stale memories
slm compact --dry-run
```
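A rough sketch of how such usage-based transitions might work. This is illustrative only: the `Memory` shape and the thresholds below are assumptions, and the real engine derives its bounds from your configuration.

```python
from dataclasses import dataclass
from enum import Enum

class State(Enum):
    ACTIVE = "active"
    WARM = "warm"
    COLD = "cold"
    ARCHIVED = "archived"

@dataclass
class Memory:
    days_since_access: int
    access_count: int

def next_state(m: Memory) -> State:
    # Illustrative thresholds only; the real engine uses usage
    # patterns and configured bounds, not fixed day counts.
    if m.days_since_access <= 7 or m.access_count >= 10:
        return State.ACTIVE
    if m.days_since_access <= 30:
        return State.WARM
    if m.days_since_access <= 180:
        return State.COLD
    return State.ARCHIVED
```

Compaction (`slm compact`) would then walk each memory through `next_state` and compress anything that lands in `ARCHIVED`.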

Behavioral Learning (v2.8)

The system learns from what works:

  • Report outcomes: `slm report-outcome --memory-ids 1,5 --outcome success`
  • View patterns: `slm behavioral-patterns`
  • Knowledge transfers across projects automatically
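Conceptually, outcome reporting boils down to tallying success and failure counts per memory. This is a toy sketch; the function names and in-memory storage here are hypothetical, not the actual implementation.

```python
from collections import defaultdict

# Hypothetical aggregation: each report ties memory IDs to a
# success or failure, and per-memory success rates are the
# simplest "pattern" that can be learned from them.
outcomes = defaultdict(lambda: [0, 0])  # memory_id -> [successes, total]

def report_outcome(memory_ids, outcome):
    for mid in memory_ids:
        outcomes[mid][1] += 1
        if outcome == "success":
            outcomes[mid][0] += 1

def success_rate(memory_id):
    successes, total = outcomes[memory_id]
    return successes / total if total else 0.0

# Mirrors: slm report-outcome --memory-ids 1,5 --outcome success
report_outcome([1, 5], "success")
report_outcome([1], "failure")
```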

Enterprise Compliance (v2.8)

Built for regulated environments:

  • Access Control — Attribute-based policies (ABAC)
  • Audit Trail — Tamper-evident event logging
  • Retention Policies — GDPR erasure, HIPAA retention, EU AI Act compliance
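As a rough illustration of how retention rules of this kind can be evaluated (the day counts and function names below are assumptions for the sketch, not SuperLocalMemory's API):

```python
from datetime import date, timedelta

# Illustrative only: a minimum-hold rule (HIPAA-style retention)
# and an erasure deadline (GDPR Article 17-style erasure). The
# numbers are placeholders, not legal guidance.
def may_delete(created: date, today: date, min_retention_days: int) -> bool:
    """A record may be deleted once its minimum retention has elapsed."""
    return today >= created + timedelta(days=min_retention_days)

def erasure_deadline(requested: date, sla_days: int = 30) -> date:
    """Date by which an erasure request must be honored."""
    return requested + timedelta(days=sla_days)
```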

New MCP Tools (v2.8)

| Tool | Purpose |
|---|---|
| `report_outcome` | Record action outcomes for behavioral learning |
| `get_lifecycle_status` | View memory lifecycle states |
| `set_retention_policy` | Configure retention policies |
| `compact_memories` | Trigger lifecycle transitions |
| `get_behavioral_patterns` | View learned behavioral patterns |
| `audit_trail` | Query compliance audit trail |

Performance

| Operation | Latency |
|---|---|
| Lifecycle evaluation | Sub-2ms |
| Access control check | Sub-1ms |
| Feature vector (20-dim) | Sub-5ms |

Upgrade: `npm install -g superlocalmemory@latest` — All v2.7 behavior preserved, zero breaking changes.

Upgrading to v2.8 | Full Changelog


What's New in v2.7

SuperLocalMemory learns your patterns, adapts to your workflow, and personalizes recall — all 100% locally. No cloud. No LLM. Your behavioral data never leaves your device.

  • Adaptive Learning — Learns tech preferences, project context, and workflow patterns
  • Three-Phase Ranking — Baseline → Rule-Based → ML Ranking (gets smarter over time)
  • Privacy by Design — Learning data stored separately, one-command GDPR erasure
  • 3 New MCP Tools — Feedback signal, pattern transparency, and user correction
  • Fully interactive visualization with zoom, pan, and click-to-explore
  • 6 layout algorithms, smart cluster filtering, 10,000+ node performance
  • Mobile & accessibility support: touch gestures, keyboard nav, screen reader

What's New in v2.6

SuperLocalMemory is now production-hardened with security, performance, and scale improvements:

  • Trust Enforcement — Bayesian scoring actively protects your memory. Agents with trust below 0.3 are blocked from write/delete operations.
  • Profile Isolation — Memory profiles fully sandboxed. Zero cross-profile data leakage.
  • Rate Limiting — Protects against memory flooding from misbehaving agents.
  • HNSW-Accelerated Graphs — Knowledge graph edge building uses HNSW index for faster construction at scale.
  • Hybrid Search Engine — Combined semantic + FTS5 + graph retrieval for maximum accuracy.
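The trust gate can be sketched with a Beta-Bernoulli model, which is one natural reading of "Bayesian scoring"; the actual model may differ, and the uniform prior below is an assumption made for the sketch.

```python
# Beta-Bernoulli trust sketch: start from a uniform Beta(1, 1)
# prior and update it with observed good/bad agent actions. The
# posterior mean serves as the trust score, and writes are gated
# at the 0.3 threshold described above.
def trust_score(good: int, bad: int, alpha: float = 1.0, beta: float = 1.0) -> float:
    """Posterior mean of Beta(alpha + good, beta + bad)."""
    return (alpha + good) / (alpha + beta + good + bad)

def can_write(good: int, bad: int, threshold: float = 0.3) -> bool:
    return trust_score(good, bad) >= threshold
```

An unknown agent starts at 0.5 (the prior mean) and can write; an agent with, say, 1 good and 8 bad actions scores about 0.18 and is blocked.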

v2.5 highlights (included): Real-time event stream, WAL-mode concurrent writes, agent tracking, memory provenance, 28 API endpoints.

Upgrade: `npm install -g superlocalmemory@latest`

Interactive Architecture Diagram | Architecture Doc | Full Changelog


The Problem

Every time you start a new Claude session:

```
You: "Remember that authentication bug we fixed last week?"
Claude: "I don't have access to previous conversations..."
You: *sighs and explains everything again*
```

AI assistants forget everything between sessions. You waste time re-explaining your:

  • Project architecture
  • Coding preferences
  • Previous decisions
  • Debugging history

The Solution

```bash
# Install in one command
npm install -g superlocalmemory

# Save a memory
superlocalmemoryv2-remember "Fixed auth bug - JWT tokens were expiring too fast, increased to 24h"

# Later, in a new session...
superlocalmemoryv2-recall "auth bug"
# ✓ Found: "Fixed auth bug - JWT tokens were expiring too fast, increased to 24h"
```

Your AI now remembers everything. Forever. Locally. For free.


🚀 Quick Start

Install (One Command)

```bash
npm install -g superlocalmemory
```

Or clone manually:

```bash
git clone https://github.com/varun369/SuperLocalMemoryV2.git && cd SuperLocalMemoryV2 && ./install.sh
```

Both methods auto-detect and configure 17+ IDEs and AI tools — Cursor, VS Code/Copilot, Codex, Claude, Windsurf, Gemini CLI, JetBrains, and more.

Verify Installation

```bash
superlocalmemoryv2-status
# ✓ Database: OK (0 memories)
# ✓ Graph: Ready
# ✓ Patterns: Ready
```

That's it. No Docker. No API keys. No cloud accounts. No configuration.

Launch Dashboard

```bash
# Start the interactive web UI
python3 ~/.claude-memory/ui_server.py

# Opens at http://localhost:8765
# Features: Timeline, search, interactive graph, statistics
```

💡 Why SuperLocalMemory?

For Developers Who Use AI Daily

| Scenario | Without Memory | With SuperLocalMemory |
|---|---|---|
| New Claude session | Re-explain entire project | `recall "project context"` → instant context |
| Debugging | "We tried X last week..." starts over | Knowledge graph shows related past fixes |
| Code preferences | "I prefer React..." every time | Pattern learning knows your style |
| Multi-project | Context constantly bleeds | Separate profiles per project |

Built on Peer-Reviewed Research

Not another simple key-value store. SuperLocalMemory implements cutting-edge memory architecture backed by peer-reviewed research — hierarchical organization, knowledge graph clustering, identity pattern learning, multi-level retrieval, adaptive re-ranking, workflow sequence mining, temporal confidence scoring, and cold-start mitigation.

The only open-source implementation combining all these approaches — entirely locally.

Read the paper →


✨ Features

Multi-Layer Memory Architecture

View Interactive Architecture Diagram — Click any layer for details, research references, and file paths.

```
┌─────────────────────────────────────────────────────────────┐
│  Layer 9: VISUALIZATION (v2.2+)                             │
│  Interactive dashboard: timeline, graph explorer, analytics │
├─────────────────────────────────────────────────────────────┤
│  Layer 8: HYBRID SEARCH (v2.2+)                             │
│  Combines: Semantic + FTS5 + Graph traversal                │
├─────────────────────────────────────────────────────────────┤
│  Layer 7: UNIVERSAL ACCESS                                  │
│  MCP + Skills + CLI (works everywhere)                      │
│  17+ IDEs with single database                              │
├─────────────────────────────────────────────────────────────┤
│  Layer 6: MCP INTEGRATION                                   │
│  Model Context Protocol: 18 tools, 6 resources, 2 prompts   │
│  Auto-configured for Cursor, Windsurf, Claude               │
├─────────────────────────────────────────────────────────────┤
│  Layer 5½: ADAPTIVE LEARNING (v2.7 — NEW)                   │
│  Three-layer learning: tech prefs + project context + flow  │
│  Local ML re-ranking — no cloud, no telemetry               │
├─────────────────────────────────────────────────────────────┤
│  Layer 5: SKILLS LAYER                                      │
│  7 universal slash-commands for AI assistants               │
│  Compatible with Claude Code, Continue, Cody                │
├─────────────────────────────────────────────────────────────┤
│  Layer 4: PATTERN LEARNING                                  │
│  Confidence-scored preference detection                     │
│  "You prefer React over Vue" (73% confidence)               │
├─────────────────────────────────────────────────────────────┤
│  Layer 3: KNOWLEDGE GRAPH + HIERARCHICAL CLUSTERING         │
│  Auto-clustering: "Python" → "Web API" → "Auth"            │
│  Community summaries with auto-generated labels             │
├─────────────────────────────────────────────────────────────┤
│  Layer 2: HIERARCHICAL INDEX                                │
│  Tree structure for fast navigation                         │
│  O(log n) lookups instead of O(n) scans                     │
├─────────────────────────────────────────────────────────────┤
│  Layer 1: RAW STORAGE                                       │
│  SQLite + Full-text search + vector search                  │
│  Compression: 60-96% space savings                          │
└─────────────────────────────────────────────────────────────┘
```

Key Capabilities

  • Adaptive Learning System — Learns your tech preferences, workflow patterns, and project context. Personalizes recall ranking using local ML. Zero cloud dependency. New in v2.7
  • Knowledge Graphs — Automatic relationship discovery. Interactive visualization with zoom, pan, click.
  • Pattern Learning — Learns your coding preferences and style automatically.
  • Multi-Profile Support — Isolated contexts for work, personal, clients. Zero context bleeding.
  • Hybrid Search — Semantic + FTS5 + Graph retrieval combined for maximum accuracy.
  • Visualization Dashboard — Web UI for timeline, search, graph exploration, analytics.
  • Framework Integrations — Use with LangChain and LlamaIndex applications.
  • Real-Time Events — Live notifications via SSE/WebSocket/Webhooks when memories change.
  • Memory Lifecycle — Automatic state transitions (Active → Warm → Cold → Archived) with bounded growth guarantees. New in v2.8
  • Behavioral Learning — Learns from action outcomes, extracts success/failure patterns, transfers knowledge across projects. New in v2.8
  • Enterprise Compliance — ABAC access control, tamper-evident audit trail, GDPR/HIPAA/EU AI Act retention policies. New in v2.8

🌐 Works Everywhere

SuperLocalMemory is the ONLY memory system that works across ALL your tools:

Supported IDEs & Tools

| Tool | Integration | How It Works |
|---|---|---|
| Claude Code | ✅ Skills + MCP | `/superlocalmemoryv2-remember` |
| Cursor | ✅ MCP + Skills | AI uses memory tools natively |
| Windsurf | ✅ MCP + Skills | Native memory access |
| Claude Desktop | ✅ MCP | Built-in support |
| OpenAI Codex | ✅ MCP + Skills | Auto-configured (TOML) |
| VS Code / Copilot | ✅ MCP + Skills | `.vscode/mcp.json` |
| Continue.dev | ✅ MCP + Skills | `/slm-remember` |
| Cody | ✅ Custom Commands | `/slm-remember` |
| Gemini CLI | ✅ MCP + Skills | Native MCP + skills |
| JetBrains IDEs | ✅ MCP | Via AI Assistant settings |
| Zed Editor | ✅ MCP | Native MCP tools |
| Aider | ✅ Smart Wrapper | `aider-smart` with context |
| Any Terminal | ✅ Universal CLI | `slm remember "content"` |

Three Ways to Access

  1. MCP (Model Context Protocol) — Auto-configured for Cursor, Windsurf, Claude Desktop

    • AI assistants get natural access to your memory
    • No manual commands needed
    • "Remember that we use this framework" just works
  2. Skills & Commands — For Claude Code, Continue.dev, Cody

    • /superlocalmemoryv2-remember in Claude Code
    • /slm-remember in Continue.dev and Cody
    • Familiar slash command interface
  3. Universal CLI — Works in any terminal or script

    • slm remember "content" - Simple, clean syntax
    • slm recall "query" - Search from anywhere
    • aider-smart - Aider with auto-context injection

All three methods use the SAME local database. No data duplication, no conflicts.

Complete setup guide for all tools →


🆚 vs Alternatives

The Hard Truth About "Free" Tiers

| Solution | Free Tier Limits | Paid Price | What's Missing |
|---|---|---|---|
| Mem0 | 10K memories, limited API | Usage-based | No pattern learning, not local |
| Zep | Limited credits | $50/month | Credit system, cloud-only |
| Supermemory | 1M tokens, 10K queries | $19-399/mo | Not local, no graphs |
| Personal.AI | ❌ No free tier | $33/month | Cloud-only, closed ecosystem |
| Letta/MemGPT | Self-hosted (complex) | TBD | Requires significant setup |
| **SuperLocalMemory** | **Unlimited** | **$0 forever** | **Nothing.** |

What Actually Matters

| Feature | Mem0 | Zep | Khoj | Letta | SuperLocalMemory |
|---|---|---|---|---|---|
| Works in Cursor | Cloud Only | ❌ | ❌ | ❌ | ✅ Local |
| Works in Windsurf | Cloud Only | ❌ | ❌ | ❌ | ✅ Local |
| Works in VS Code | 3rd Party | Partial | ❌ | ❌ | ✅ Native |
| Universal CLI | ❌ | ❌ | ❌ | ❌ | ✅ |
| Multi-Layer Architecture | ❌ | ❌ | ❌ | ❌ | ✅ |
| Pattern Learning | ❌ | ❌ | ❌ | ❌ | ✅ |
| Adaptive ML Ranking | Cloud LLM | ❌ | ❌ | ❌ | ✅ Local ML |
| Knowledge Graphs | ❌ | ❌ | ❌ | ❌ | ✅ |
| 100% Local | ❌ | ❌ | Partial | Partial | ✅ |
| GDPR by Design | ❌ | ❌ | ❌ | ❌ | ✅ |
| Zero Setup | ❌ | ❌ | ❌ | ❌ | ✅ |
| Completely Free | Limited | Limited | Partial | ❌ | ✅ |

SuperLocalMemory is the ONLY solution that:

  • ✅ Learns and adapts locally — no cloud LLM needed for personalization
  • ✅ Works across 17+ IDEs and CLI tools
  • ✅ Remains 100% local (no cloud dependencies)
  • ✅ GDPR Article 17 compliant — one-command data erasure
  • ✅ Completely free with unlimited memories

See full competitive analysis →


⚡ Measured Performance

All numbers measured on real hardware (Apple M4 Pro, 24GB RAM). No estimates — real benchmarks.

Search Speed

| Database Size | Median Latency | P95 Latency |
|---|---|---|
| 100 memories | 10.6ms | 14.9ms |
| 500 memories | 65.2ms | 101.7ms |
| 1,000 memories | 124.3ms | 190.1ms |

For typical personal use (under 500 memories), search results return faster than you blink.

Concurrent Writes — Zero Errors

| Scenario | Writes/sec | Errors |
|---|---|---|
| 1 AI tool writing | 204/sec | 0 |
| 2 AI tools simultaneously | 220/sec | 0 |
| 5 AI tools simultaneously | 130/sec | 0 |

Concurrent-safe architecture = zero "database is locked" errors, ever.

Storage

10,000 memories = 13.6 MB on disk (~1.4 KB per memory). Your entire AI memory history takes less space than a photo.

Graph Construction

| Memories | Build Time |
|---|---|
| 100 | 0.28s |
| 1,000 | 10.6s |

Auto-clustering discovers 6-7 natural topic communities from your memories.
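A toy illustration of how communities can fall out of a memory graph: connect memories whose keyword overlap clears a similarity threshold, then take connected components. The real engine uses embeddings with HNSW-accelerated similarity and richer clustering; the keyword sets and threshold here are assumptions for the sketch.

```python
# Hypothetical keyword sets stand in for memories; edges link
# memories whose Jaccard overlap is at or above the threshold.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def communities(memories: dict, threshold: float = 0.3) -> list:
    ids = list(memories)
    edges = {mid: set() for mid in ids}
    for i, u in enumerate(ids):
        for v in ids[i + 1:]:
            if jaccard(memories[u], memories[v]) >= threshold:
                edges[u].add(v)
                edges[v].add(u)
    # Connected components via depth-first search
    seen, groups = set(), []
    for start in ids:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(edges[node] - comp)
        seen |= comp
        groups.append(comp)
    return groups
```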

Full benchmark details →


🔧 CLI Commands

```bash
# Memory Operations
superlocalmemoryv2-remember "content" --tags tag1,tag2  # Save memory
superlocalmemoryv2-recall "search query"                 # Search
superlocalmemoryv2-list                                  # Recent memories
superlocalmemoryv2-status                                # System health

# Profile Management
superlocalmemoryv2-profile list                          # Show all profiles
superlocalmemoryv2-profile create <name>                 # New profile
superlocalmemoryv2-profile switch <name>                 # Switch context

# Knowledge Graph
python ~/.claude-memory/graph_engine.py build            # Build graph
python ~/.claude-memory/graph_engine.py stats            # View clusters

# Pattern Learning
python ~/.claude-memory/pattern_learner.py update        # Learn patterns
python ~/.claude-memory/pattern_learner.py context 0.5   # Get identity

# Visualization Dashboard
python ~/.claude-memory/ui_server.py                     # Launch web UI
```

Complete CLI reference →


📖 Documentation

| Guide | Description |
|---|---|
| Quick Start | Get running in 5 minutes |
| Installation | Detailed setup instructions |
| Visualization Dashboard | Interactive web UI guide |
| Interactive Graph | Graph exploration guide (NEW v2.6.5) |
| Framework Integrations | LangChain & LlamaIndex setup |
| Knowledge Graph | How clustering works |
| Pattern Learning | Identity extraction |
| Memory Lifecycle | Lifecycle states, compaction, bounded growth (v2.8) |
| Behavioral Learning | Action outcomes, pattern extraction (v2.8) |
| Enterprise Compliance | ABAC, audit trail, retention policies (v2.8) |
| Upgrading to v2.8 | Migration guide from v2.7 |
| API Reference | Python API documentation |

🤝 Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

Areas for contribution:

  • Additional pattern categories
  • Performance optimizations
  • Integration with more AI assistants
  • Documentation improvements

💖 Support This Project

If SuperLocalMemory saves you time, consider supporting its development:


📜 License

MIT License — use freely, even commercially. Just include the license.


👨‍💻 Author

Varun Pratap Bhardwaj — Founder, Qualixar · Solution Architect

GitHub

Building the complete agent development platform at Qualixar — memory, testing, contracts, and security for AI agents.

Part of the Qualixar Agent Development Platform

SuperLocalMemory is part of Qualixar, a suite of open-source tools for building reliable AI agents:

| Product | What It Does |
|---|---|
| SuperLocalMemory | Local-first AI agent memory |
| SkillFortify | Agent skill supply chain security |
