Memlord
Self-hosted MCP memory server for personal use and teams
Quickstart • How It Works • MCP Tools • Configuration • Requirements • License
✨ Features
- 🔍 Hybrid search — BM25 (full-text) + vector KNN (pgvector) fused via Reciprocal Rank Fusion
- 📂 Multi-user — each user sees only their own memories; workspaces for shared team knowledge
- 🛠️ 10 MCP tools — store, retrieve, recall, list, search by tag, get, update, delete, move, list workspaces
- 🌐 Web UI — browse, search, edit and delete memories in the browser; export/import JSON
- 🔒 OAuth 2.1 — full in-process authorization server, always enabled
- 🐘 PostgreSQL — pgvector for embeddings, tsvector for full-text search
- 📊 Progressive disclosure — search returns compact snippets by default; call get_memory(id) only for what you need, reducing token usage
- 🔁 Deduplication — automatically detects near-identical memories before saving, preventing noise accumulation
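Near-duplicate detection of this kind is typically a similarity check of the new memory's embedding against existing ones. A minimal sketch of the idea (not Memlord's actual implementation; the function name and threshold are illustrative):

```python
import math

def is_near_duplicate(new_vec: list[float],
                      existing: list[list[float]],
                      threshold: float = 0.95) -> bool:
    """Flag the new memory if its cosine similarity to any stored embedding exceeds the threshold."""
    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)
    return any(cosine(new_vec, v) >= threshold for v in existing)

print(is_near_duplicate([1.0, 0.0], [[0.99, 0.01], [0.0, 1.0]]))  # → True
```

In practice the candidate set would come from a pgvector KNN query rather than a full scan, but the decision rule is the same.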
🆚 How Memlord compares
| | Memlord | OpenMemory | mcp-memory-service | basic-memory |
|---|---|---|---|---|
| Search | BM25 + vector + RRF | Vector only (Qdrant) | BM25 + vector + RRF | BM25 + vector |
| Embeddings | Local ONNX, zero config | OpenAI default; Ollama optional | Local ONNX, zero config | Local FastEmbed |
| Storage | PostgreSQL + pgvector | PostgreSQL + Qdrant | SQLite-vec / Cloudflare Vectorize | SQLite + Markdown files |
| Multi-user | ✅ | ❌ single-user in practice | ⚠️ agent-ID scoping, no isolation | ❌ |
| Workspaces | ✅ shared + personal, invite links | ⚠️ "Apps" namespace | ⚠️ tags + conversation_id | ✅ per-project flag |
| Authentication | ✅ OAuth 2.1 | ❌ none (self-hosted) | ✅ OAuth 2.0 + PKCE | ❌ |
| Web UI | ✅ browse, edit, export | ✅ Next.js dashboard | ✅ rich UI, graph viz, quality scores | ❌ no local UI (cloud only) |
| MCP tools | 10 | 5 | 15+ | ~20 |
| Self-hosted | ✅ single process | ✅ Docker (3 containers) | ✅ | ✅ |
| Memory input | Manual (explicit store) | Auto-extracted by LLM | Manual | Manual (Markdown notes) |
| Memory types | fact / preference / instruction / feedback | auto-extracted facts | — | observations + wiki links |
| Time-aware search | ✅ natural language dates | ⚠️ REST only, not in MCP tools | — | ✅ recent_activity |
| Token efficiency | ✅ progressive disclosure | ❌ | — | ✅ build_context traversal |
| Import / Export | ✅ JSON | ✅ ZIP (JSON + JSONL) | — | ✅ Markdown (human-readable) |
| License | AGPL-3.0 / Commercial | Apache 2.0 | Apache 2.0 | AGPL-3.0 |
Where competitors have a real edge:
- OpenMemory — auto-extracts memories from raw conversation text; no need to decide what to store manually; good import/export
- mcp-memory-service — richer web UI (graph visualization, quality scoring, 8 tabs); more permissive license (Apache 2.0); multiple transport options (stdio, SSE, HTTP)
- basic-memory — memories are human-readable Markdown files you can edit, version-control, and read without any server; wiki-style entity links form a local knowledge graph; ~20 MCP tools
When to pick Memlord:
- You want zero-config local embeddings — ONNX model ships with the server, no Ollama or external API needed
- You run a multi-user team server with proper OAuth 2.1 auth and invite-based workspaces
- You want a production-grade database (PostgreSQL) that scales beyond a single machine's SQLite
- You manage memories explicitly — store exactly what matters, typed and tagged, not everything the LLM decides to extract
- You want a self-hosted Web UI with full CRUD and JSON export, without a cloud subscription
🚀 Quickstart
```shell
# Install dependencies
uv sync --dev

# Download ONNX model (~23 MB)
uv run python scripts/download_model.py

# Run migrations
alembic upgrade head

# Start the server
memlord
```
Open http://localhost:8000 for the Web UI. The MCP endpoint is at /mcp.
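To connect an MCP client that supports remote servers over streamable HTTP, register the endpoint with a config entry along these lines (the `memlord` key and exact schema are illustrative; consult your client's documentation):

```json
{
  "mcpServers": {
    "memlord": {
      "url": "http://localhost:8000/mcp"
    }
  }
}
```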
🐳 Docker
```shell
cp .env.example .env
docker compose up
```
🔍 How It Works
Each search request runs BM25 and vector KNN in parallel, then merges results via Reciprocal Rank Fusion:
```mermaid
flowchart TD
    Q([query]) --> BM25["BM25\nsearch_vector @@ websearch_to_tsquery"]
    Q --> EMB["ONNX embed\nall-MiniLM-L6-v2 · 384d · local"]
    EMB --> KNN["KNN\nembedding <=> query_vector\ncosine distance"]
    BM25 --> RRF["RRF fusion\nscore = 1/(k+rank_bm25) + 1/(k+rank_vec)\nk=60"]
    KNN --> RRF
    RRF --> R([top-N results])
```
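The fusion step is straightforward to express in code. A minimal sketch of Reciprocal Rank Fusion with k=60, matching the formula above (not Memlord's actual implementation; names are illustrative):

```python
def rrf_fuse(bm25_ids: list[str], knn_ids: list[str], k: int = 60) -> list[str]:
    """Merge two ranked ID lists: each list contributes 1/(k + rank) per document, ranks 1-based."""
    scores: dict[str, float] = {}
    for ranking in (bm25_ids, knn_ids):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# "a" tops both lists; "b" ranks in both and beats "d", which appears in only one:
print(rrf_fuse(["a", "b", "c"], ["a", "d", "b"]))  # → ['a', 'b', 'd', 'c']
```

Because RRF only uses ranks, the BM25 and cosine scores never need to be normalized against each other, which is why it is a popular choice for hybrid search.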
⚙️ Configuration
All settings use the MEMLORD_ prefix. See .env.example for the full list.
| Variable | Default | Description |
|---|---|---|
| MEMLORD_DB_URL | postgresql+asyncpg://postgres:postgres@localhost/memlord | PostgreSQL connection URL |
| MEMLORD_PORT | 8000 | Server port |
| MEMLORD_BASE_URL | http://localhost:8000 | Public URL for OAuth |
| MEMLORD_OAUTH_JWT_SECRET | memlord-dev-secret-please-change | JWT signing secret |
OAuth is always enabled. Set MEMLORD_BASE_URL to your public URL and change MEMLORD_OAUTH_JWT_SECRET before
deploying.
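For a public deployment, the overrides in .env typically look like this (all values are placeholders):

```shell
MEMLORD_DB_URL=postgresql+asyncpg://memlord:changeme@db.internal/memlord
MEMLORD_BASE_URL=https://memory.example.com
MEMLORD_OAUTH_JWT_SECRET=use-a-long-random-string   # e.g. from: openssl rand -hex 32
```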
🛠️ MCP Tools
| Tool | Description |
|---|---|
| store_memory | Save a memory (idempotent by content); raises on near-duplicates |
| retrieve_memory | Hybrid semantic + full-text search; returns snippets by default |
| recall_memory | Search by natural-language time expression; returns snippets by default |
| list_memories | Paginated list with type/tag filters |
| search_by_tag | AND/OR tag search |
| get_memory | Fetch a single memory by ID with full content |
| update_memory | Update content, type, tags, or metadata by ID |
| delete_memory | Delete by ID |
| move_memory | Move a memory to a different workspace |
| list_workspaces | List workspaces you are a member of (including personal) |
Workspace management (create, invite, join, leave) is handled via the Web UI.
💻 System Requirements
- Python 3.12
- PostgreSQL ≥ 15 with pgvector extension
- uv — Python package manager
👨‍💻 Development
```shell
pyright src/           # type check
black .                # format
pytest                 # run tests
alembic-autogen-check  # verify migrations are up to date
```
📄 License
Memlord is dual-licensed:
- AGPL-3.0 — free for open-source use. If you run a modified version as a network service, you must publish your source code.
- Commercial License — for proprietary or closed-source deployments. Contact [email protected] or [email protected] to purchase.