Claude Memory MCP — Universal AI Conversation Memory
A Model Context Protocol (MCP) server that provides persistent, searchable conversation memory across multiple AI platforms. Store, search, and retrieve conversation history with sub-millisecond full-text search powered by SQLite FTS5.
Features
- 🔍 Sub-millisecond full-text search via SQLite FTS5 with relevance ranking
- 🏷️ Automatic topic extraction — 574+ unique topics across 2,000+ associations
- 📊 Weekly summaries with insights and patterns
- 🗃️ Organized file storage by date and topic
- 🤖 Multi-platform support — Claude, ChatGPT, Cursor AI, and custom formats
- 🔌 MCP integration for Claude Desktop and Claude Code
Quick Start
Prerequisites
- Python 3.11+ (tested with 3.11.12)
- Ubuntu/WSL environment recommended
- Claude Desktop (for MCP integration)
Installation
Option 1: Install with Claude Code (Recommended)
Quick Install - Copy and paste this into Claude Code:
claude mcp add --transport stdio claude-memory -- sh -c "cd $HOME/Code/claude-memory-mcp && python3 src/server_fastmcp.py"
Important: Replace $HOME/Code/claude-memory-mcp with the actual path where you cloned this repository.
Examples for different locations:
# If cloned to ~/Code/claude-memory-mcp (default)
claude mcp add --transport stdio claude-memory -- sh -c "cd $HOME/Code/claude-memory-mcp && python3 src/server_fastmcp.py"
# If cloned to ~/projects/claude-memory-mcp
claude mcp add --transport stdio claude-memory -- sh -c "cd $HOME/projects/claude-memory-mcp && python3 src/server_fastmcp.py"
# If cloned to ~/dev/claude-memory-mcp
claude mcp add --transport stdio claude-memory -- sh -c "cd $HOME/dev/claude-memory-mcp && python3 src/server_fastmcp.py"
What this does:
- --transport stdio: Uses standard input/output for local processes
- claude-memory: Server identifier name
- --: Separates Claude CLI flags from the server command
- sh -c "cd ... && python3 ...": Changes to the project directory before running the server
This adds the MCP server to your Claude Desktop configuration automatically.
Documentation: https://code.claude.com/docs/en/mcp
Option 2: Manual Installation
1. Clone the repository:
git clone https://github.com/yourusername/claude-memory-mcp.git
cd claude-memory-mcp
2. Set up a virtual environment:
python3 -m venv .venv
source .venv/bin/activate
3. Install dependencies:
pip install -e .
This installs the package in editable mode along with all required dependencies:
- mcp[cli]>=1.9.2 - Model Context Protocol
- jsonschema>=4.0.0 - JSON schema validation
- aiofiles>=24.1.0 - Async file operations
4. Test the system:
python3 tests/validate_system.py
Basic Usage
Standalone Testing
# Test core functionality
python3 tests/standalone_test.py
MCP Server Mode
# Run as MCP server (from project root)
python3 src/server_fastmcp.py
# Or from src directory
cd src && python3 server_fastmcp.py
Bulk Import
# Import conversations from JSON export
python3 scripts/bulk_import_enhanced.py your_conversations.json
MCP Tools
search_conversations(query, limit=5)
Full-text search across all stored conversations with relevance ranking.
search_by_topic(topic, limit=10)
Find conversations tagged with a specific topic.
add_conversation(content, title, date)
Store a new conversation with automatic topic extraction and FTS indexing.
generate_weekly_summary(week_offset=0)
Generate insights and patterns from recent conversations.
get_search_stats()
View search engine statistics — index size, topic counts, and engine status.
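The search tools above are backed by SQLite FTS5. A minimal sketch of the technique, with an illustrative schema and sample rows (not the server's actual tables or data):

```python
import sqlite3

# In-memory FTS5 index with two sample conversations (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE conversations USING fts5(title, content)")
conn.executemany(
    "INSERT INTO conversations VALUES (?, ?)",
    [
        ("Terraform setup", "Provisioning Azure resources with Terraform."),
        ("MCP debugging", "Fixing an MCP server import error in Python."),
    ],
)

def search_conversations(query, limit=5):
    """Full-text search ranked by bm25 (lower score = more relevant)."""
    rows = conn.execute(
        "SELECT title, bm25(conversations) AS score "
        "FROM conversations WHERE conversations MATCH ? "
        "ORDER BY score LIMIT ?",
        (query, limit),
    ).fetchall()
    return [title for title, _ in rows]

print(search_conversations("terraform azure"))  # → ['Terraform setup']
```

FTS5 treats multiple terms as an implicit AND and ranks hits with the built-in bm25() function, which is what makes sub-millisecond relevance-ranked search possible without a separate search engine.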
Architecture
~/claude-memory/
├── conversations/
│ ├── 2025/
│ │ └── 06-june/
│ │ └── 2025-06-01_topic-name.md
│ ├── index.json # Search index
│ └── topics.json # Topic frequency
└── summaries/
└── weekly/
└── week-2025-06-01.md
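The date/topic layout above can be derived mechanically. A sketch of the path construction (the helper name is an assumption, not the server's API):

```python
from datetime import date
from pathlib import Path

def conversation_path(base: Path, day: date, topic: str) -> Path:
    """Build a storage path like conversations/2025/06-june/2025-06-01_topic-name.md."""
    month_dir = f"{day:%m}-{day:%B}".lower()   # e.g. "06-june"
    filename = f"{day:%Y-%m-%d}_{topic}.md"    # e.g. "2025-06-01_topic-name.md"
    return base / "conversations" / f"{day:%Y}" / month_dir / filename

p = conversation_path(Path.home() / "claude-memory", date(2025, 6, 1), "topic-name")
# → ~/claude-memory/conversations/2025/06-june/2025-06-01_topic-name.md
```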
Configuration
Claude Desktop Integration
Add to your Claude Desktop MCP config:
{
"mcpServers": {
"claude-memory": {
"command": "python",
      "args": ["/path/to/claude-memory-mcp/src/server_fastmcp.py"]
}
}
}
Storage Location
Default storage: ~/claude-memory/
Override with environment variable:
export CLAUDE_MEMORY_PATH="/custom/path"
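In Python, the override amounts to an environment-variable lookup with a home-directory fallback. A sketch (the server's actual resolution logic in src/ may differ):

```python
import os
from pathlib import Path

def storage_root() -> Path:
    """Return CLAUDE_MEMORY_PATH if set, else the default ~/claude-memory."""
    override = os.environ.get("CLAUDE_MEMORY_PATH")
    return Path(override) if override else Path.home() / "claude-memory"

os.environ["CLAUDE_MEMORY_PATH"] = "/custom/path"
print(storage_root())  # → /custom/path
```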
Logging Configuration
Log Format
Switch between human-readable text logs (default) and structured JSON logs for production:
# JSON format (for production log aggregation)
export CLAUDE_MCP_LOG_FORMAT=json
# Text format (default, for development)
export CLAUDE_MCP_LOG_FORMAT=text
JSON Log Example:
{
"timestamp": "2025-01-15T10:30:45",
"level": "INFO",
"logger": "claude_memory_mcp",
"function": "add_conversation",
"line": 145,
"message": "Added conversation successfully",
"context": {
"type": "performance",
"duration_seconds": 0.045,
"conversation_id": "conv_abc123"
}
}
JSON logging is ideal for:
- Production deployments with log aggregation (Datadog, ELK, CloudWatch)
- Automated monitoring and alerting
- Structured log analysis and querying
- Performance tracking and debugging
See docs/json-logging.md for detailed JSON logging documentation.
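A rough sketch of a formatter that produces the JSON shape shown above, using only the standard library; the project's real implementation lives in src/logging_config.py and may differ:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log record, with optional structured context."""

    def format(self, record):
        entry = {
            "timestamp": self.formatTime(record, "%Y-%m-%dT%H:%M:%S"),
            "level": record.levelname,
            "logger": record.name,
            "function": record.funcName,
            "line": record.lineno,
            "message": record.getMessage(),
        }
        # Fields passed via `extra={"context": ...}` land as record attributes.
        if hasattr(record, "context"):
            entry["context"] = record.context
        return json.dumps(entry)

logger = logging.getLogger("claude_memory_mcp")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("Added conversation successfully",
            extra={"context": {"type": "performance", "duration_seconds": 0.045}})
```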
File Structure
claude-memory-mcp/
├── src/
│ ├── server_fastmcp.py # Main MCP server
│ ├── conversation_memory.py # Core memory engine + SQLite FTS5
│ ├── format_detector.py # Auto-detect AI platform format
│ ├── validators.py # Input validation
│ ├── logging_config.py # Structured logging (text/JSON)
│ ├── importers/ # Platform-specific importers
│ │ ├── chatgpt_importer.py
│ │ ├── claude_importer.py
│ │ ├── cursor_importer.py
│ │ └── generic_importer.py
│ └── schemas/ # JSON schema validation
├── tests/ # 435 tests, 98.68% coverage
├── data/ # Consolidated app data
├── scripts/ # Import and utility scripts
└── docs/ # Documentation
Performance
SQLite FTS5 full-text search, benchmarked against 347 indexed conversations:
- Search Speed: 0.2–0.5ms per query (4.4x faster than linear JSON scanning)
- Topic Search: 0.3–0.4ms across 574 unique topics
- Write Speed: ~33ms per conversation (includes indexing)
- Capacity: 371 conversations in production use over 10 months
- Test Coverage: 98.68% (435 tests) — 0 code smells, 0 security hotspots (SonarCloud verified)
Last benchmarked: April 2026 | Detailed Report
Note for Developers: Performance benchmarks create a ~/claude-memory-test directory for isolated testing. Normal MCP usage only uses ~/claude-memory/. If you see ~/claude-memory-test, it can be safely deleted.
Search Examples
# Technical topics
search_conversations("terraform azure")
search_conversations("mcp server setup")
search_conversations("python debugging")
# Project discussions
search_conversations("interview preparation")
search_conversations("product management")
search_conversations("architecture decisions")
# Specific problems
search_conversations("dependency issues")
search_conversations("authentication error")
search_conversations("deployment configuration")
Development
Adding New Features
- Topic Extraction: Modify _extract_topics() in ConversationMemoryServer
- Search Algorithm: Enhance the search_conversations() method
- Summary Generation: Improve generate_weekly_summary() logic
Testing
# Run validation suite
python3 tests/validate_system.py
# Test individual components
python3 tests/standalone_test.py
# Run full test suite with coverage
python3 -m pytest tests/ --ignore=tests/standalone_test.py --cov=src --cov-report=term
# Import test data
python3 scripts/bulk_import_enhanced.py test_data.json --dry-run
Test Data Storage (Developers Only): If you run performance benchmarks or test data generators, they create a ~/claude-memory-test directory to isolate test data from your production ~/claude-memory directory. This is only for development/testing - normal MCP usage does not create this directory.
To clean up test data after running benchmarks:
rm -rf ~/claude-memory-test
Or using the Makefile cleanup target:
make clean-test-data
Troubleshooting
Common Issues
MCP Import Errors:
pip install mcp[cli] # Include CLI extras
Search Returns No Results:
- Check conversation indexing: ls ~/claude-memory/conversations/index.json
- Verify file permissions
- Run validation: python3 tests/validate_system.py
Weekly Summary Timezone Errors:
- Ensure all datetime objects use consistent timezone handling
- Recent fix addresses timezone-aware vs naive comparison
System Requirements
- Python: 3.11+ (tested with 3.11.12)
- Disk Space: ~10MB per 100 conversations
- Memory: <100MB RAM usage
- OS: Ubuntu/WSL recommended, macOS/Windows compatible
Contributing
- Fork the repository
- Create a feature branch: git checkout -b feature-name
- Commit changes: git commit -am 'Add feature'
- Push to branch: git push origin feature-name
- Submit a Pull Request
License
MIT License - see LICENSE file for details
Acknowledgments
- Built with Model Context Protocol (MCP)
- Designed for Claude Desktop integration
- Inspired by the need for persistent conversation context
Status: Production ready ✅
Last Updated: April 2026
Version: 2.0.0