# Inked

A memory management MCP server for Claude apps. Fast and simple by default, optionally enhanced with AI-powered search, with all data stored locally in SQLite.
## Features

- **Fast text search** - Lightning-fast memory retrieval by default
- **AI-powered search** - Optional embedding-based semantic search (currently not working as of 06/25/25)
- **AI reranking** - Experimental reranking for even better results
- **Simple storage** - Plain text storage in SQLite (no encryption overhead)
- **Secure** - All data stored locally in `~/.inked/`
## Installation

### Option 1: npm (Recommended)

```bash
npm install -g @frgmt/inked
```

### Option 2: Local Development

```bash
git clone https://github.com/frgmt/inked.git
cd inked
npm install
npm run build
node dist/index.js
```
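If you built Inked locally, you can point your MCP configuration at the compiled entry point instead of using `npx`. This is a minimal sketch; the path to `dist/index.js` is a placeholder for wherever you cloned the repository:

```json
{
  "mcpServers": {
    "inked": {
      "command": "node",
      "args": ["/path/to/inked/dist/index.js"]
    }
  }
}
```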
## Basic Usage

Add to your MCP server configuration:

**Standard (fast text search):**

```json
{
  "mcpServers": {
    "inked": {
      "command": "npx",
      "args": ["@frgmt/inked"]
    }
  }
}
```

Use this one for now; the embedding and reranking configurations below don't work yet and still have some kinks to work out.
============IGNORE THIS===========
**With AI embeddings (semantic search):**

```json
{
  "mcpServers": {
    "inked": {
      "command": "npx",
      "args": ["@frgmt/inked", "--use-embeddings"]
    }
  }
}
```
**With embeddings + AI reranking (best results):**

```json
{
  "mcpServers": {
    "inked": {
      "command": "npx",
      "args": ["@frgmt/inked", "--use-embeddings", "--use-reranking"]
    }
  }
}
```
## Experimental Features

### AI-Powered Search (Optional)

Inked supports experimental embedding-based search for more nuanced memory retrieval.

#### Embedding Models

| Flag | Model | Memory Usage | Best For |
|---|---|---|---|
| `--use-embeddings` | Qwen3-0.6B | ~2GB RAM | Short memories, quick responses |
| `--use-embeddings=4b` | Qwen3-4B | ~8GB RAM | Longer memories, better nuance |
| `--use-embeddings=8b` | Qwen3-8B | ~16GB RAM | Complex memories, documents |
#### Reranking Models (Requires Embeddings)

| Flag | Model | Additional Memory | Best For |
|---|---|---|---|
| `--use-reranking` | Qwen3-Reranker-0.6B | ~1GB RAM | Improved relevance |
| `--use-reranking=4b` | Qwen3-Reranker-4B | ~4GB RAM | Best result quality |
#### How to Choose Models

**For most users:** Start with no flags (fast text search)

**For better semantic understanding:** Add `--use-embeddings`
- Good for finding memories by meaning rather than exact words
- First run downloads ~2GB model (one-time)

**For nuanced, longer memories:** Use `--use-embeddings=4b`
- Better at understanding context in longer text
- Handles more complex relationships between ideas

**For best results:** Add `--use-reranking` with embeddings
- AI re-scores top candidates for optimal ranking
- Significantly improves search quality

**For power users:** `--use-embeddings=8b --use-reranking=4b` (see the example configuration after this list)
- Best possible search quality
- Requires 20+ GB RAM
- Good for research, documentation, complex projects
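For reference, a power-user setup might look like the sketch below. It assumes the sized flags are passed in `args` the same way as the basic flags shown earlier; like the rest of this section, it is not currently functional:

```json
{
  "mcpServers": {
    "inked": {
      "command": "npx",
      "args": ["@frgmt/inked", "--use-embeddings=8b", "--use-reranking=4b"]
    }
  }
}
```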
#### Memory Requirements
| Configuration | RAM Needed | Download Size | First Launch |
|---|---|---|---|
| Default (text) | ~50MB | 0MB | Instant |
| Basic embeddings | ~2GB | ~1.2GB | 2-5 minutes |
| 4B embeddings | ~8GB | ~4GB | 5-10 minutes |
| 8B embeddings | ~16GB | ~8GB | 10-20 minutes |
| + Reranking | +1-4GB | +0.5-2GB | +1-3 minutes |
Models are cached locally and only downloaded once.
=============END IGNORE===========
## Usage Guide

### Auto-Memory Setup
Add this to your Claude settings/preferences:
"At the start of new conversations, use the inked Read tool with 'ALL' to load my memories. Only mention memories when directly relevant to our conversation. Use the Write tool to save important preferences, facts, or insights that should be remembered for future conversations."
### How It Works

- **Read once per conversation:** Memories stay in context after initial load
- **Silent operation:** Claude uses memories without mentioning them unless relevant
- **Smart writing:** Automatically saves important information for future sessions
### When to Write Memories
- User preferences and communication style
- Important project information and context
- Recurring topics or themes
- Facts that should persist across conversations
- Insights or patterns worth remembering
## Tools

### `read`

Search and retrieve memories.

Parameters:
- `search` (required): Query string, or "ALL" to retrieve everything
- `topr` (optional): Number of results (1-5, default: 3)
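As an illustration of the parameter shape, a `read` call searching for a topic and asking for five results might pass arguments like this (hypothetical values, not output from the server):

```json
{
  "search": "project preferences",
  "topr": 5
}
```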
### `write`

Add or delete memories.

Parameters:
- `content` (required): Memory text (NEW) or search query (DELETE)
- `sTool` (required): "NEW" or "DELETE"
- `id` (optional): Specific ID to delete
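Similarly, saving a new memory might look like the following sketch (hypothetical values shown only to illustrate the parameters above):

```json
{
  "content": "User prefers concise answers with code examples",
  "sTool": "NEW"
}
```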
## License
AGPL v3 - Open source for personal use. Commercial use requires either open-sourcing your application or a commercial license.