Enables persistent knowledge storage for Claude using a knowledge graph with multiple database backends like PostgreSQL and SQLite.
A simple way to give LLMs persistent memory across conversations. This server lets Claude or VS Code remember information about you, your projects, and your preferences using a knowledge graph.
Key Features:
Follow these steps in order to get the knowledge graph working with Claude:
Option A: NPX (Easiest - No download needed)
# Test that it works
npx knowledgegraph-mcp --help
Option B: Docker
# Clone and build
git clone https://github.com/n-r-w/knowledgegraph-mcp.git
cd knowledgegraph-mcp
docker build -t knowledgegraph-mcp .
SQLite (Default - No setup needed):
[your home folder]/.knowledge-graph/
PostgreSQL (For advanced users):
CREATE DATABASE knowledgegraph;
Edit your Claude Desktop configuration file:
Find your config file:
~/Library/Application Support/Claude/claude_desktop_config.json
%APPDATA%\Claude\claude_desktop_config.json
~/.config/Claude/claude_desktop_config.json
If you chose NPX + SQLite (default and easiest):
{
"mcpServers": {
"Knowledge Graph": {
"command": "npx",
"args": ["-y", "knowledgegraph-mcp"]
}
}
}
Note: SQLite will automatically create the database in [your home folder]/.knowledge-graph/knowledgegraph.db. To use a custom location, add: "KNOWLEDGEGRAPH_SQLITE_PATH": "/path/to/your/database.db"
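For example, the full entry with a custom location set via the env block might look like this sketch (the path is illustrative):
{
  "mcpServers": {
    "Knowledge Graph": {
      "command": "npx",
      "args": ["-y", "knowledgegraph-mcp"],
      "env": {
        "KNOWLEDGEGRAPH_SQLITE_PATH": "/path/to/your/database.db"
      }
    }
  }
}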
If you chose Docker + SQLite (default):
{
"mcpServers": {
"Knowledge Graph": {
"command": "docker",
"args": [
"run", "-i", "--rm",
"-v", "[you home folder]/.knowledge-graph:/app/.knowledge-graph",
"knowledgegraph-mcp"
]
}
}
}
Note: The volume mount ensures your data persists between Docker runs. For custom paths, add:
-e KNOWLEDGEGRAPH_SQLITE_PATH=/app/.knowledge-graph/custom.db
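Putting that together, a sketch of the Docker entry with a custom database file (the filename is illustrative) would be:
{
  "mcpServers": {
    "Knowledge Graph": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "-v", "[your home folder]/.knowledge-graph:/app/.knowledge-graph",
        "-e", "KNOWLEDGEGRAPH_SQLITE_PATH=/app/.knowledge-graph/custom.db",
        "knowledgegraph-mcp"
      ]
    }
  }
}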
If you chose PostgreSQL:
{
"mcpServers": {
"Knowledge Graph": {
"command": "npx",
"args": ["-y", "knowledgegraph-mcp"],
"env": {
"KNOWLEDGEGRAPH_STORAGE_TYPE": "postgresql",
"KNOWLEDGEGRAPH_CONNECTION_STRING": "postgresql://postgres:yourpassword@localhost:5432/knowledgegraph"
}
}
}
}
If you also want to use this with VS Code, add this to your User Settings (JSON) or create .vscode/mcp.json:
Using NPX + SQLite (default):
{
"mcp": {
"servers": {
"Knowledge Graph": {
"command": "npx",
"args": ["-y", "knowledgegraph-mcp"],
}
}
}
}
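Using NPX + PostgreSQL (a sketch that reuses the env block from the Claude Desktop PostgreSQL example above; replace yourpassword and confirm the database name):
{
  "mcp": {
    "servers": {
      "Knowledge Graph": {
        "command": "npx",
        "args": ["-y", "knowledgegraph-mcp"],
        "env": {
          "KNOWLEDGEGRAPH_STORAGE_TYPE": "postgresql",
          "KNOWLEDGEGRAPH_CONNECTION_STRING": "postgresql://postgres:yourpassword@localhost:5432/knowledgegraph"
        }
      }
    }
  }
}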
Using Docker (default SQLite):
{
"mcp": {
"servers": {
"Knowledge Graph": {
"command": "docker",
"args": [
"run", "-i", "--rm",
"-e", "KNOWLEDGEGRAPH_CONNECTION_STRING=sqlite://./knowledgegraph.db",
"knowledgegraph-mcp"
]
}
}
}
}
Using Docker + PostgreSQL:
First, ensure your PostgreSQL database is set up:
# Create the database (run this once)
psql -h 127.0.0.1 -p 5432 -U postgres -c "CREATE DATABASE knowledgegraph;"
Then configure VS Code:
{
"mcp": {
"servers": {
"Knowledge Graph": {
"command": "docker",
"args": [
"run", "-i", "--rm",
"--network", "host",
"-e", "KNOWLEDGEGRAPH_STORAGE_TYPE=postgresql",
"-e", "KNOWLEDGEGRAPH_CONNECTION_STRING=postgresql://postgres:yourpassword@127.0.0.1:5432/knowledgegraph",
"knowledgegraph-mcp"
]
}
}
}
}
Alternative Docker + PostgreSQL (if --network host doesn't work):
{
"mcp": {
"servers": {
"Knowledge Graph": {
"command": "docker",
"args": [
"run", "-i", "--rm",
"--add-host", "host.docker.internal:host-gateway",
"-e", "KNOWLEDGEGRAPH_STORAGE_TYPE=postgresql",
"-e", "KNOWLEDGEGRAPH_CONNECTION_STRING=postgresql://postgres:yourpassword@host.docker.internal:5432/knowledgegraph",
"knowledgegraph-mcp"
]
}
}
}
}
Important Notes:
- Replace yourpassword with your actual PostgreSQL password
- Ensure the knowledgegraph database exists before starting
- If you get connection errors, try the alternative configuration above
- For troubleshooting Docker + PostgreSQL issues, see the Common Issues section
Customization:
LLM Compatibility:
If the LLM is not using the knowledge graph, ask it: "Explain STEP-BY-STEP why you didn't use the knowledge graph? DO NOT DO ANYTHING ELSE" to get a detailed report and identify issues with the instructions.
Available Prompts:
Close and reopen Claude Desktop. You should now see "Knowledge Graph" in your available tools.
Quick Test Commands for LLMs:
Note: The service includes comprehensive input validation to prevent errors. If you encounter any issues, check the Troubleshooting Guide for common solutions.
The knowledge graph enables powerful queries through four interconnected concepts:
Store people, projects, companies, technologies as searchable entities.
Real Example - Project Management:
{
"name": "Sarah_Chen",
"entityType": "person",
"observations": ["Senior React developer", "Leads frontend team", "Available for urgent tasks"],
"tags": ["developer", "team-lead", "available"]
}
LLM Benefit: Find "all available team leads" instantly with tag search.
Connect entities to answer complex questions like "Who works on what?"
Real Example - Team Structure:
{
"from": "Sarah_Chen",
"to": "Project_Alpha",
"relationType": "leads"
}
LLM Benefit: Query "Find all projects Sarah leads" or "Who leads Project Alpha?"
Store specific, searchable facts about entities.
Real Examples - Actionable Information:
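For instance (illustrative values), observations are plain facts attached to an entity:
["Deadline: March 15", "Blocked by missing API keys", "Prefers code reviews before noon"]
LLM Benefit: a text search for "blocked" or "deadline" can surface these facts directly.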
Enable immediate status and category searches.
Real Examples - Project Workflow:
["urgent", "in-progress", "frontend"]
→ Find urgent frontend tasks["completed", "bug-fix"]
→ Track completed bug fixes["available", "senior"]
→ Find available senior staffThe server supports several environment variables for customization:
- KNOWLEDGEGRAPH_STORAGE_TYPE: Database type (sqlite or postgresql, default: sqlite)
- KNOWLEDGEGRAPH_CONNECTION_STRING: Database connection string
- KNOWLEDGEGRAPH_SQLITE_PATH: Custom SQLite database path (optional)
- KNOWLEDGEGRAPH_PROJECT: Project identifier for data isolation (default: knowledgegraph_default_project)
- KNOWLEDGEGRAPH_SEARCH_MAX_RESULTS: Maximum number of results to return from database searches (default: 100, max: 1000)
- KNOWLEDGEGRAPH_SEARCH_BATCH_SIZE: Batch size for processing large query arrays (default: 10, max: 50)
- KNOWLEDGEGRAPH_SEARCH_MAX_CLIENT_ENTITIES: Maximum number of entities to load for client-side search (default: 10000, max: 100000)
- KNOWLEDGEGRAPH_SEARCH_CLIENT_CHUNK_SIZE: Chunk size for processing large datasets in client-side search (default: 1000, max: 10000)
Note: Search limits are automatically validated and clamped to safe ranges to prevent performance issues.
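These variables go in the same env block as the storage settings. A sketch with illustrative values (the project name here is hypothetical):
{
  "mcpServers": {
    "Knowledge Graph": {
      "command": "npx",
      "args": ["-y", "knowledgegraph-mcp"],
      "env": {
        "KNOWLEDGEGRAPH_PROJECT": "my_project",
        "KNOWLEDGEGRAPH_SEARCH_MAX_RESULTS": "200",
        "KNOWLEDGEGRAPH_SEARCH_MAX_CLIENT_ENTITIES": "5000",
        "KNOWLEDGEGRAPH_SEARCH_CLIENT_CHUNK_SIZE": "500"
      }
    }
  }
}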
The search system includes several performance optimizations:
- Entity Loading Limits: KNOWLEDGEGRAPH_SEARCH_MAX_CLIENT_ENTITIES limits how many entities are loaded for client-side search
- Chunked Processing: KNOWLEDGEGRAPH_SEARCH_CLIENT_CHUNK_SIZE controls chunk size for large entity sets
Recommended Values by Dataset Size:
- KNOWLEDGEGRAPH_SEARCH_MAX_CLIENT_ENTITIES=5000, KNOWLEDGEGRAPH_SEARCH_CLIENT_CHUNK_SIZE=500
- KNOWLEDGEGRAPH_SEARCH_MAX_CLIENT_ENTITIES=2000, KNOWLEDGEGRAPH_SEARCH_CLIENT_CHUNK_SIZE=200
Performance Monitoring:
The server provides these tools for managing your knowledge graph:
CREATE new entities (people, concepts, objects) in knowledge graph.
Input:
- entities (Entity[]): Array of entity objects. Each REQUIRES:
  - name (string): Unique identifier, non-empty
  - entityType (string): Category (e.g., 'person', 'project'), non-empty
  - observations (string[]): Facts about entity, MUST contain ≥1 non-empty string
  - tags (string[], optional): Exact-match labels for filtering
- project_id (string, optional): Project name to isolate data
CONNECT entities to enable powerful queries and discovery.
Input:
- relations (Relation[]): Array of relationship objects. Each REQUIRES:
  - from (string): Source entity name (must exist)
  - to (string): Target entity name (must exist)
  - relationType (string): Relationship type in active voice (works_at, manages, depends_on, uses)
- project_id (string, optional): Project name to isolate data
ADD factual observations to existing entities.
Input:
- observations (ObservationUpdate[]): Array of observation updates. Each REQUIRES:
  - entityName (string): Target entity name (must exist)
  - observations (string[]): New facts to add, MUST contain ≥1 non-empty string
- project_id (string, optional): Project name to isolate data
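A request might look like this sketch (illustrative values, reusing the Sarah_Chen entity from the earlier example):
{
  "observations": [
    {
      "entityName": "Sarah_Chen",
      "observations": ["Completed code review for Project_Alpha", "Out of office next Friday"]
    }
  ]
}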
ADD status/category tags for INSTANT filtering.
Input:
- updates (TagUpdate[]): Array of tag updates. Each REQUIRES:
  - entityName (string): Target entity name (must exist)
  - tags (string[]): Status/category tags to add (exact-match, case-sensitive)
- project_id (string, optional): Project name to isolate data
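For example (illustrative values, reusing names from the examples above):
{
  "updates": [
    { "entityName": "Project_Alpha", "tags": ["urgent", "in-progress"] }
  ]
}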
RETRIEVE complete knowledge graph with all entities and relationships.
Input:
- project_id (string, optional): Project name to isolate data
SEARCH entities by text or tags. SUPPORTS MULTIPLE QUERIES for batch searching.
Input:
- query (string | string[], optional): Search query for text search. Can be a single string or an array of strings for batch searching. OPTIONAL when exactTags is provided for tag-only searches.
- searchMode (string, optional): "exact" or "fuzzy" (default: "exact"). Use fuzzy only if exact returns no results
- fuzzyThreshold (number, optional): Fuzzy similarity threshold. 0.3=default, 0.1=very broad, 0.7=very strict. Lower values find more results
- exactTags (string[], optional): Tags for exact-match searching (case-sensitive). Use for category filtering
- tagMatchMode (string, optional): For exactTags: "any"=entities with ANY tag, "all"=entities with ALL tags (default: "any")
- page (number, optional): Page number for pagination (0-based, default: 0)
- pageSize (number, optional): Number of results per page (1-1000, default: 50)
- project_id (string, optional): Project name to isolate data
Examples:
search_knowledge(query="JavaScript", searchMode="exact")
search_knowledge(query="React", page=0, pageSize=20)
search_knowledge(query="components", page=2, pageSize=100)
search_knowledge(query=["JavaScript", "React"], page=0, pageSize=30)
search_knowledge(query="React", exactTags=["frontend"], page=1, pageSize=25)
search_knowledge(exactTags=["urgent", "bug"], tagMatchMode="all") - NO QUERY NEEDED
Pagination Benefits:
RETRIEVE specific entities by exact names with their interconnections.
Input:
- names (string[]): Array of entity names to retrieve
- project_id (string, optional): Project name to isolate data
PERMANENTLY DELETE entities and all their relationships.
Input:
- entityNames (string[]): Array of entity names to delete
- project_id (string, optional): Project name to isolate data
REMOVE specific observations from entities while keeping entities intact.
Input:
- deletions (ObservationDeletion[]): Array of deletion requests. Each REQUIRES:
  - entityName (string): Target entity name
  - observations (string[]): Specific observations to remove
- project_id (string, optional): Project name to isolate data
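For example (illustrative, reusing the earlier Sarah_Chen entity), removing an outdated fact:
{
  "deletions": [
    {
      "entityName": "Sarah_Chen",
      "observations": ["Available for urgent tasks"]
    }
  ]
}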
UPDATE relationship structure when connections change.
Input:
- relations (Relation[]): Array of relations to delete. Each REQUIRES:
  - from (string): Source entity name
  - to (string): Target entity name
  - relationType (string): Exact relationship type to remove
- project_id (string, optional): Project name to isolate data
UPDATE entity status by removing outdated tags.
Input:
- updates (TagUpdate[]): Array of tag removal requests. Each REQUIRES:
  - entityName (string): Target entity name
  - tags (string[]): Outdated tags to remove (exact-match, case-sensitive)
- project_id (string, optional): Project name to isolate data
This project includes comprehensive multi-backend testing to ensure compatibility across both SQLite and PostgreSQL:
Run tests against both backends:
npm run test:multi-backend
Run all tests (original + multi-backend):
npm run test:all-backends
Using Taskfile (if installed):
task test:multi-backend
task test:comprehensive
Clone and setup:
git clone https://github.com/n-r-w/knowledgegraph-mcp.git
cd knowledgegraph-mcp
npm install
npm run build
Run tests:
npm test # All tests including multi-backend
npm run test:unit # Unit tests only
npm run test:performance # Performance benchmarks
If you encounter any issues during setup or usage, please refer to our comprehensive Troubleshooting Guide, which covers:
The guide includes step-by-step solutions for common problems and diagnostic commands to help identify issues.
This is an enhanced version of the official MCP Memory Server with additional features:
MIT License - Feel free to use, modify, and distribute this software.