# Inked

A powerful MCP server for memory management with Claude apps: fast, simple, backed by local SQLite storage, and optionally enhanced with AI-powered search.
## Features

- **Fast text search** - Lightning-fast memory retrieval by default
- **AI-powered search** - Optional embedding-based semantic search (currently not working as of 06/25/25)
- **AI reranking** - Experimental reranking for even better results
- **Simple storage** - Plain text storage in SQLite (no encryption overhead)
- **Secure** - All data stored locally in `~/.inked/`
## Installation

### Option 1 (Recommended)

```bash
npm install -g @frgmt/inked
```

### Option 2: Local Development

```bash
git clone https://github.com/frgmt/inked.git
cd inked
npm install
npm run build
node dist/index.js
```
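
If you run the local build, your MCP configuration can point at the compiled entry file instead of the published package. A minimal sketch, assuming the repository was cloned to `/path/to/inked` (substitute your actual path):

```json
{
  "mcpServers": {
    "inked": {
      "command": "node",
      "args": ["/path/to/inked/dist/index.js"]
    }
  }
}
```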
## Basic Usage

Add to your MCP server configuration.

Standard (fast text search):

```json
{
  "mcpServers": {
    "inked": {
      "command": "npx",
      "args": ["@frgmt/inked"]
    }
  }
}
```
> **Note:** Use the standard configuration above for now. The embedding and reranking configurations below are not currently working; they're documented for reference while the remaining kinks get worked out.
With AI embeddings (semantic search):

```json
{
  "mcpServers": {
    "inked": {
      "command": "npx",
      "args": ["@frgmt/inked", "--use-embeddings"]
    }
  }
}
```

With embeddings + AI reranking (best results):

```json
{
  "mcpServers": {
    "inked": {
      "command": "npx",
      "args": ["@frgmt/inked", "--use-embeddings", "--use-reranking"]
    }
  }
}
```
## Experimental Features

### AI-Powered Search (Optional)

Inked supports experimental embedding-based search for more nuanced memory retrieval.
#### Embedding Models

| Flag | Model | Memory Usage | Best For |
|---|---|---|---|
| `--use-embeddings` | Qwen3-0.6B | ~2GB RAM | Short memories, quick responses |
| `--use-embeddings=4b` | Qwen3-4B | ~8GB RAM | Longer memories, better nuance |
| `--use-embeddings=8b` | Qwen3-8B | ~16GB RAM | Complex memories, documents |
#### Reranking Models (Requires Embeddings)

| Flag | Model | Additional Memory | Best For |
|---|---|---|---|
| `--use-reranking` | Qwen3-Reranker-0.6B | ~1GB RAM | Improved relevance |
| `--use-reranking=4b` | Qwen3-Reranker-4B | ~4GB RAM | Best result quality |
### How to Choose Models

**For most users:** Start with no flags (fast text search).

**For better semantic understanding:** Add `--use-embeddings`.
- Good for finding memories by meaning rather than exact words
- First run downloads a ~2GB model (one-time)

**For nuanced, longer memories:** Use `--use-embeddings=4b`.
- Better at understanding context in longer text
- Handles more complex relationships between ideas

**For best results:** Add `--use-reranking` alongside embeddings.
- AI re-scores top candidates for optimal ranking
- Significantly improves search quality

**For power users:** Use `--use-embeddings=8b --use-reranking=4b` (see the example configuration after this list).
- Best possible search quality
- Requires 20+ GB RAM
- Good for research, documentation, complex projects
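
As a concrete sketch, the power-user setup might look like this in your MCP configuration, assuming the sized `=4b`/`=8b` variants are passed as arguments the same way as the base flags shown earlier:

```json
{
  "mcpServers": {
    "inked": {
      "command": "npx",
      "args": ["@frgmt/inked", "--use-embeddings=8b", "--use-reranking=4b"]
    }
  }
}
```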
### Memory Requirements

| Configuration | RAM Needed | Download Size | First Launch |
|---|---|---|---|
| Default (text) | ~50MB | 0MB | Instant |
| Basic embeddings | ~2GB | ~1.2GB | 2-5 minutes |
| 4B embeddings | ~8GB | ~4GB | 5-10 minutes |
| 8B embeddings | ~16GB | ~8GB | 10-20 minutes |
| + Reranking | +1-4GB | +0.5-2GB | +1-3 minutes |

Models are cached locally and only downloaded once.
## Usage Guide

### Auto-Memory Setup

Add this to your Claude settings/preferences:

> At the start of new conversations, use the inked Read tool with 'ALL' to load my memories. Only mention memories when directly relevant to our conversation. Use the Write tool to save important preferences, facts, or insights that should be remembered for future conversations.
### How It Works

- **Read once per conversation:** Memories stay in context after the initial load
- **Silent operation:** Claude uses memories without mentioning them unless relevant
- **Smart writing:** Claude automatically saves important information for future sessions
### When to Write Memories
- User preferences and communication style
- Important project information and context
- Recurring topics or themes
- Facts that should persist across conversations
- Insights or patterns worth remembering
## Tools

### `read`

Search and retrieve memories.

Parameters:

- `search` (required): Query string, or "ALL" for everything
- `topr` (optional): Number of results (1-5, default: 3)
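
For illustration, a call to `read` might pass arguments like the following (parameter names are from the list above; the query value is made up):

```json
{
  "search": "user communication preferences",
  "topr": 3
}
```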
### `write`

Add or delete memories.

Parameters:

- `content` (required): Memory text (NEW) or search query (DELETE)
- `sTool` (required): "NEW" or "DELETE"
- `id` (optional): Specific ID to delete
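
Similarly for `write`, a hypothetical call saving a new memory, followed by one deleting by search query (values are illustrative):

```json
{
  "content": "User prefers TypeScript and concise explanations",
  "sTool": "NEW"
}
```

```json
{
  "content": "prefers TypeScript",
  "sTool": "DELETE"
}
```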
## License
AGPL v3 - Open source for personal use. Commercial use requires either open-sourcing your application or a commercial license.