Knowledge graph-based persistent memory system
A basic implementation of persistent memory using a local knowledge graph. This lets Claude remember information about the user across chats.
Entities are the primary nodes in the knowledge graph. Each entity has:
- A unique name (identifier)
- An entity type (e.g., "person", "organization", "event")
- A list of observations
Example:
{
  "name": "John_Smith",
  "entityType": "person",
  "observations": ["Speaks fluent Spanish"]
}
Relations define directed connections between entities. They are always stored in active voice and describe how entities interact or relate to each other.
Example:
{
  "from": "John_Smith",
  "to": "Anthropic",
  "relationType": "works_at"
}
Observations are discrete pieces of information about an entity. They are:
- Stored as strings
- Attached to specific entities
- Can be added or removed independently
- Should be atomic (one fact per observation)
Example:
{
  "entityName": "John_Smith",
  "observations": [
    "Speaks fluent Spanish",
    "Graduated in 2019",
    "Prefers morning meetings"
  ]
}
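Taken together, entities, relations, and observations describe a small graph. Combining the examples above, the graph state might look like the following sketch (the Anthropic entity's type and observation are illustrative additions, not defaults):

{
  "entities": [
    {
      "name": "John_Smith",
      "entityType": "person",
      "observations": [
        "Speaks fluent Spanish",
        "Graduated in 2019",
        "Prefers morning meetings"
      ]
    },
    {
      "name": "Anthropic",
      "entityType": "organization",
      "observations": ["AI research company"]
    }
  ],
  "relations": [
    {
      "from": "John_Smith",
      "to": "Anthropic",
      "relationType": "works_at"
    }
  ]
}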
The server exposes the following tools:

- create_entities: Create multiple new entities in the knowledge graph
  - Input: entities (array of objects), each with:
    - name (string): Entity identifier
    - entityType (string): Type classification
    - observations (string[]): Associated observations
- create_relations: Create multiple new relations between entities
  - Input: relations (array of objects), each with:
    - from (string): Source entity name
    - to (string): Target entity name
    - relationType (string): Relationship type in active voice
- add_observations: Add new observations to existing entities
  - Input: observations (array of objects), each with:
    - entityName (string): Target entity
    - contents (string[]): New observations to add
- delete_entities: Remove entities and their associated relations
  - Input: entityNames (string[])
- delete_observations: Remove specific observations from entities
  - Input: deletions (array of objects), each with:
    - entityName (string): Target entity
    - observations (string[]): Observations to remove
- delete_relations: Remove specific relations from the graph
  - Input: relations (array of objects), each with:
    - from (string): Source entity name
    - to (string): Target entity name
    - relationType (string): Relationship type
- read_graph: Read the entire knowledge graph
  - No input required
- search_nodes: Search for nodes matching a query
  - Input: query (string)
- open_nodes: Retrieve specific nodes by name
  - Input: names (string[])
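As an illustration, adding a new fact to an existing entity with add_observations takes arguments shaped like this (a minimal sketch; the entity name and observation text are hypothetical, and the exact wire format depends on your MCP client):

{
  "observations": [
    {
      "entityName": "John_Smith",
      "contents": ["Started learning Portuguese"]
    }
  ]
}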
To use this server with Claude Desktop, add this to your claude_desktop_config.json:

Docker:
{
  "mcpServers": {
    "memory": {
      "command": "docker",
      "args": ["run", "-i", "-v", "claude-memory:/app/dist", "--rm", "mcp/memory"]
    }
  }
}
NPX:
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-memory"
      ]
    }
  }
}
The server can be configured using the following environment variables:
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-memory"
      ],
      "env": {
        "MEMORY_FILE_PATH": "/path/to/custom/memory.json"
      }
    }
  }
}
MEMORY_FILE_PATH: Path to the memory storage JSON file (default: memory.json in the server directory)
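The memory file is a plain-text store of the graph's records. As a rough sketch of what it might contain (the exact on-disk layout is an implementation detail of the server and may differ; treat this as an assumption, not a specification):

{"type": "entity", "name": "John_Smith", "entityType": "person", "observations": ["Speaks fluent Spanish"]}
{"type": "relation", "from": "John_Smith", "to": "Anthropic", "relationType": "works_at"}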
For quick installation in VS Code, you can use the one-click installation buttons provided in the project README. For manual installation, add the following JSON block to your User Settings (JSON) file in VS Code. You can do this by pressing Ctrl + Shift + P and typing Preferences: Open Settings (JSON).
Optionally, you can add it to a file called .vscode/mcp.json in your workspace. This will allow you to share the configuration with others. Note that the mcp key is not needed in the .vscode/mcp.json file.
NPX:
{
  "mcp": {
    "servers": {
      "memory": {
        "command": "npx",
        "args": [
          "-y",
          "@modelcontextprotocol/server-memory"
        ]
      }
    }
  }
}
Docker:
{
  "mcp": {
    "servers": {
      "memory": {
        "command": "docker",
        "args": [
          "run",
          "-i",
          "-v",
          "claude-memory:/app/dist",
          "--rm",
          "mcp/memory"
        ]
      }
    }
  }
}
The prompt for utilizing memory depends on the use case; changing it shapes how often the model records memories and what kinds of information it stores.
Here is an example prompt for chat personalization. You could use this prompt in the "Custom Instructions" field of a Claude.ai Project.
Follow these steps for each interaction:
1. User Identification:
- You should assume that you are interacting with default_user
- If you have not identified default_user, proactively try to do so.
2. Memory Retrieval:
- Always begin your chat by saying only "Remembering..." and retrieve all relevant information from your knowledge graph
- Always refer to your knowledge graph as your "memory"
3. Memory:
- While conversing with the user, be attentive to any new information that falls into these categories:
a) Basic Identity (age, gender, location, job title, education level, etc.)
b) Behaviors (interests, habits, etc.)
c) Preferences (communication style, preferred language, etc.)
d) Goals (goals, targets, aspirations, etc.)
e) Relationships (personal and professional relationships up to 3 degrees of separation)
4. Memory Update:
- If any new information was gathered during the interaction, update your memory as follows:
a) Create entities for recurring organizations, people, and significant events
b) Connect them to the current entities using relations
c) Store facts about them as observations
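With a prompt like this, a memory update at the end of a conversation typically translates into a handful of tool calls. A hypothetical sequence (the entity names and facts below are invented purely for illustration) might be:

create_entities:
{
  "entities": [
    { "name": "Acme_Corp", "entityType": "organization", "observations": ["Client of default_user"] }
  ]
}

create_relations:
{
  "relations": [
    { "from": "default_user", "to": "Acme_Corp", "relationType": "works_with" }
  ]
}

add_observations:
{
  "observations": [
    { "entityName": "default_user", "contents": ["Prefers weekly status updates"] }
  ]
}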
Docker:
docker build -t mcp/memory -f src/memory/Dockerfile .
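Once built, the image can be used via the Claude Desktop Docker configuration shown above (docker run -i -v claude-memory:/app/dist --rm mcp/memory), with the claude-memory volume keeping the graph persistent across container runs.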
This MCP server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.