Neuroplastic Memory
Biologically inspired memory for Claude. Hebbian plasticity, dream cycles, and temporal decay for persistent knowledge synthesis.
claude-brain
A biologically inspired persistent memory system for LLMs, modeled on neuroplasticity, consolidation, and sleep.
Mind extracts concepts from conversations, connects them into a persistent knowledge graph, and runs dream cycles that consolidate important memories, discover novel associations, and flag contradictions — the same way NREM and REM sleep shape human memory.
> [!NOTE]
> This is a weekend project, vibe-coded with Claude. It's a working prototype and a playground for ideas at the intersection of neuroscience and LLM memory. This is not production software. Expect rough edges. Contributions welcome.
Architecture
```mermaid
flowchart TB
Input(["Conversation Text"]) --> Consolidation
subgraph Consolidation["Consolidation Pipeline"]
direction LR
Extract["LLM Extraction
+ Affect Signals"] --> Embed["Sentence
Embedding"] --> Dedup["Deduplication
+ Temporal Decay"]
end
Consolidation --> Appraisal
Goals(["Goals"]) -.-> Appraisal
subgraph Appraisal["Appraisal System"]
direction LR
S["Engagement
Questions
Personal Stake
Arousal"] --> Score["Consolidation
Score"]
N["Novelty"] --> Score
F["Frequency"] --> Score
G["Goal
Relevance"] --> Score
end
Appraisal --> KG
subgraph KG["Knowledge Graph"]
Nodes["Concept Nodes"] <--> Edges["Relationship Edges"]
end
KG <--> Dream
subgraph Dream["Dream Engine"]
direction LR
NREM["NREM
Replay"] --> REM["REM
Walks"] --> Wake["Waking
Gate"] --> Threat["Threat
Simulation"]
end
Query(["Query"]) --> KG
KG --> Results(["Ranked Results"])
style Consolidation fill:#d4f0da,stroke:#44cc66,color:#000
style Appraisal fill:#d4e4ff,stroke:#4488ff,color:#000
style KG fill:#e8e8e8,stroke:#888,color:#000
style Dream fill:#ecd4f4,stroke:#aa55dd,color:#000
style Input fill:#4488ff,stroke:#4488ff,color:#fff
style Query fill:#ff8833,stroke:#ff8833,color:#fff
style Results fill:#ff8833,stroke:#ff8833,color:#fff
style Goals fill:#66aa66,stroke:#66aa66,color:#fff
```
Quick Start
```shell
# Install
git clone https://github.com/gammon-bio/claude-brain && cd claude-brain
uv sync

# Set your Anthropic API key
echo "ANTHROPIC_API_KEY=sk-ant-..." > .env
```
Add the MCP server to your Claude Desktop config (claude_desktop_config.json):
```json
{
  "mcpServers": {
    "neuroplastic-memory": {
      "command": "/absolute/path/to/mind/.venv/bin/python",
      "args": ["-m", "mind"],
      "cwd": "/absolute/path/to/mind",
      "env": {
        "PYTHONPATH": "/absolute/path/to/mind/src"
      }
    }
  }
}
```
Then in Claude Desktop:
```
You: "Store this: [paste conversation or research notes]"
Claude: → calls memory_store → extracts concepts, builds graph

You: "Run a dream cycle"
Claude: → calls memory_dream → NREM consolidation, REM exploration, threat scan

You: "What do you remember about X?"
Claude: → calls memory_retrieve → ranked results with connection context
```
Using Memory Across All Conversations
The memory graph persists globally at ~/.neuroplastic-memory/, so it works across any chat or project. To have Claude use it automatically, go to Settings → General and add the following to your Personal Preferences:
```
I use a neuroplastic memory system via MCP tools.
For every conversation:
1. At the START, call memory_retrieve with my first
   message to check for relevant prior context.
2. When I share substantive information — research
   findings, technical decisions, strategic insights,
   project updates — call memory_store with the key
   content. Do not use built-in memory. Use the MCP
   tool memory_store.
3. I may ask you to run memory_dream or
   memory_dream_report at any time.
```
This gives you a single shared memory across all conversations. To keep memory siloed to a specific project instead, add the same instructions to that project's Project Instructions rather than your global preferences.
Subsystems
1. Consolidation Pipeline
Turns raw text into graph knowledge. An LLM extracts concepts and relationships (with affect signals — how strongly the user emphasized each idea). Each concept is embedded into a dense vector, checked against existing nodes for deduplication, and integrated into the graph. An exponential temporal decay runs on every edge: recently activated connections survive, stale ones are pruned. This is Hebbian plasticity — connections that fire together wire together, and connections that don't are forgotten.
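The decay step can be sketched roughly as follows. The edge layout, field names, and pruning threshold here are illustrative assumptions, not the project's actual schema:

```python
import math
import time

def decay_edges(edges, decay_constant=0.01, prune_threshold=0.05, now=None):
    """Apply exponential temporal decay to edge weights and prune stale edges.

    `edges` maps (source, target) -> {"weight": float, "last_activated": unix_ts}.
    This structure is hypothetical; only the decay math mirrors the README.
    """
    now = now or time.time()
    survivors = {}
    for key, edge in edges.items():
        hours_idle = (now - edge["last_activated"]) / 3600
        # Recently activated connections keep most of their weight;
        # stale ones decay toward zero and are eventually pruned.
        decayed = edge["weight"] * math.exp(-decay_constant * hours_idle)
        if decayed >= prune_threshold:
            survivors[key] = {**edge, "weight": decayed}
    return survivors
```

An edge activated an hour ago barely decays, while one untouched for weeks falls below the pruning threshold and is forgotten.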
2. Appraisal System
Every new concept is scored across four channels before entering the graph:
| Channel | Signal | Mechanism |
|---|---|---|
| Salience | Engagement density, question frequency, first-person markers ("I believe", "I need"), arousal (exclamation, caps, strong words), user emphasis from LLM | Behavioral scoring — what the user cares about, not just what they said |
| Novelty | Cosine distance to nearest existing nodes | Embedding-space distance — genuinely new ideas score higher |
| Goal relevance | Cosine similarity to active goal embeddings | Concepts aligned with declared goals are prioritized |
| Frequency | Log-scaled access count relative to the most-accessed node | Frequently retrieved concepts are treated as more important |
These combine into a single consolidation score (weighted sum, configurable) that determines a node's survival fitness — how likely it is to be replayed during NREM and to resist decay.
Conflict boost: If a concept lands near existing contradicts edges, salience receives a +0.3 additive boost that can push the score to 1.0 regardless of other signals. This models an adrenaline-like override — contradictory information triggers immediate alerting, ensuring it isn't lost to decay before the threat simulation phase can flag it.
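A minimal sketch of the weighted sum and conflict boost, assuming the documented default weights of 0.25 each. The function name, signature, and exact placement of the +0.3 boost are assumptions for illustration:

```python
def consolidation_score(salience, novelty, goal_relevance, frequency,
                        alpha=0.25, beta=0.25, gamma=0.25, delta=0.25,
                        near_contradiction=False):
    """Combine the four appraisal channels into one consolidation score.

    Illustrative only: the real weights live in the project's config and
    are tunable at runtime.
    """
    score = (alpha * salience + beta * novelty
             + gamma * goal_relevance + delta * frequency)
    if near_contradiction:
        # Adrenaline-like override: a flat +0.3 boost for concepts near
        # contradicts edges, clamped so the score never exceeds 1.0.
        score += 0.3
    return min(1.0, max(0.0, score))
```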
3. Dream Engine
Offline processing modeled on sleep neuroscience. Four phases run in sequence:
NREM (slow-wave replay): High-salience nodes are replayed and their edge weights are strengthened, mimicking the hippocampal-cortical replay observed in slow-wave sleep. Edges below a pruning threshold are removed.
REM (creative exploration): Biased random walks from seed nodes traverse the graph. At each step, the walker may jump to a semantically similar but topologically distant node — creative teleportation. When two nodes on a walk are embedding-similar but share no edge, a provisional connection is proposed.
Waking evaluation: Provisional edges are re-evaluated at a stricter threshold. Only connections that survive this gate are promoted to real dream_connection edges in the graph. This prevents hallucinated associations from polluting the knowledge base.
Threat simulation: Scans high-confidence nodes for nearby contradicts edges and flags them. These contradiction alerts surface conflicting information that may need resolution.
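The REM walk and waking gate above might look roughly like this. The data shapes, helper names, and jump heuristic are assumptions, not the project's implementation:

```python
import random

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def rem_walk(neighbors, embeddings, seed, steps=10, jump_probability=0.3,
             rng=random):
    """Biased random walk with creative teleportation.

    `neighbors` maps node -> adjacent nodes; `embeddings` maps node -> vector.
    With probability `jump_probability` the walker jumps to the most
    embedding-similar non-neighbor instead of following an edge.
    """
    path = [seed]
    current = seed
    for _ in range(steps):
        adjacent = neighbors.get(current, [])
        strangers = [n for n in embeddings if n != current and n not in adjacent]
        if strangers and rng.random() < jump_probability:
            # Teleport: semantically close but topologically distant.
            current = max(strangers,
                          key=lambda n: cosine(embeddings[current], embeddings[n]))
        elif adjacent:
            current = rng.choice(adjacent)
        else:
            break
        path.append(current)
    return path

def waking_gate(provisional, embeddings, waking_threshold=0.35):
    """Promote only provisional edges whose similarity clears the stricter gate."""
    return [(a, b) for a, b in provisional
            if cosine(embeddings[a], embeddings[b]) >= waking_threshold]
```

Provisional pairs collected during walks are re-checked by `waking_gate`; only those that survive become `dream_connection` edges.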
4. Goal System
Users can declare goals ("understand X", "investigate Y"). Each goal is embedded and persisted. During consolidation, every new concept is scored for relevance against active goals — concepts aligned with what you're trying to learn are prioritized for consolidation and survival. Goals are managed through the memory_goals MCP tool and stored in goals.json alongside the graph.
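Goal relevance reduces to a similarity lookup. A minimal sketch, assuming the score is the best cosine similarity to any active goal (the project may aggregate goals differently):

```python
def goal_relevance(concept_vec, goal_vecs):
    """Score a concept by its best cosine similarity to any active goal embedding."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = sum(a * a for a in u) ** 0.5
        nv = sum(b * b for b in v) ** 0.5
        return dot / (nu * nv) if nu and nv else 0.0
    # No declared goals means no goal signal for this concept.
    return max((cos(concept_vec, g) for g in goal_vecs), default=0.0)
```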
5. Knowledge Graph
A NetworkX-backed directed graph with typed edges (causes, contradicts, part_of, dream_connection, goal_linked, related_to). Each node carries its embedding, appraisal scores, access count, timestamps, and metadata. The graph persists to JSON and supports similarity search via cosine distance over embeddings.
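Since the graph is NetworkX-backed, a toy version of the node/edge layout and similarity search might look like this. Attribute names and node labels are illustrative, not the project's schema:

```python
import networkx as nx

G = nx.DiGraph()
# Each node carries its embedding plus appraisal metadata.
G.add_node("caffeine", embedding=[0.9, 0.1], consolidation_score=0.7, access_count=3)
G.add_node("sleep quality", embedding=[0.1, 0.9], consolidation_score=0.5, access_count=1)
# Typed, weighted relationship edge.
G.add_edge("caffeine", "sleep quality", type="contradicts", weight=0.8)

def similar_nodes(graph, query_vec, top_k=5):
    """Rank nodes by cosine similarity of their embeddings to a query vector."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = sum(a * a for a in u) ** 0.5
        nv = sum(b * b for b in v) ** 0.5
        return dot / (nu * nv) if nu and nv else 0.0
    scored = [(n, cos(d["embedding"], query_vec))
              for n, d in graph.nodes(data=True)]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:top_k]
```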
MCP Tools
| Tool | Description |
|---|---|
| memory_store | Ingest conversation text — extracts concepts and relationships via Claude |
| memory_retrieve | Query the graph — returns formatted context with inline connections |
| memory_dream | Run a full dream cycle (NREM + REM + waking + threat) |
| memory_dream_report | Human-readable narrative of the last dream cycle |
| memory_goals | Manage research goals — add, list, or remove |
| memory_status | Graph statistics (node/edge counts, score distributions) |
| memory_stats_detailed | Detailed appraisal breakdown per node, top edges, contradictions, dream edges |
| memory_tune | Adjust parameters at runtime (e.g. dream.rem_jump_probability) |
3D Visualization
```shell
python3 viz/serve.py [port] [graph_path]
# Defaults: port 8080, graph ~/.neuroplastic-memory/graph.json
```
Open http://localhost:8080 for an interactive 3D force-directed graph.
Nodes are sized by consolidation score.
| Node color | Origin |
|---|---|
| Blue | conversation — extracted from ingested text |
| Purple | dream — created during dream cycles |
| Green | consolidation — created during consolidation |
| Orange | query — logged from retrieval queries |
Edges are thicker for higher weight. Animated particles appear on edges with weight > 0.5.
| Edge color | Relationship type |
|---|---|
| Gray | related_to |
| Blue | causes |
| Gold | dream_connection |
| Red | contradicts |
| Dark gray | part_of |
| Muted green | goal_linked |
Controls: Refresh, Auto-refresh (30s), Play Dream (replays walk paths with jump markers), Speed slider. Hover any node to see its label, origin, and scores.
Configuration
All parameters are tunable at runtime via memory_tune or in src/mind/schemas/config.py:
- Appraisal weights: alpha/beta/gamma/delta (0.25 each) — balance salience, novelty, goal-relevance, and retrieval frequency in the consolidation score.
- Salience sub-weights: salience_engagement_weight (0.2), salience_question_weight (0.2), salience_personal_weight (0.3), salience_arousal_weight (0.3) — control which behavioral signals drive salience.
- Dream parameters: rem_jump_probability (0.3), rem_walk_steps (10), rem_seed_count (5), waking_threshold (0.35), nrem_salience_threshold (0.3)
- Consolidation: decay_constant (0.01), novelty_threshold (0.7)
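For orientation, the documented defaults could be mirrored in a single dataclass along these lines. The real definitions live in src/mind/schemas/config.py and may be named or structured differently:

```python
from dataclasses import dataclass

@dataclass
class BrainConfig:
    """Illustrative mirror of the documented default parameters."""
    # Appraisal weights (salience, novelty, goal-relevance, frequency)
    alpha: float = 0.25
    beta: float = 0.25
    gamma: float = 0.25
    delta: float = 0.25
    # Salience sub-weights
    salience_engagement_weight: float = 0.2
    salience_question_weight: float = 0.2
    salience_personal_weight: float = 0.3
    salience_arousal_weight: float = 0.3
    # Dream parameters
    rem_jump_probability: float = 0.3
    rem_walk_steps: int = 10
    rem_seed_count: int = 5
    waking_threshold: float = 0.35
    nrem_salience_threshold: float = 0.3
    # Consolidation
    decay_constant: float = 0.01
    novelty_threshold: float = 0.7
```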
Persistence
| File | Location |
|---|---|
| Knowledge graph | ~/.neuroplastic-memory/graph.json |
| Dream reports | ~/.neuroplastic-memory/last_dream_report.json |
| Goals | ~/.neuroplastic-memory/goals.json |
Tests
```shell
PYTHONPATH=src uv run pytest tests/ -v
```
161 tests covering appraisal scoring, consolidation pipeline, dream phases, graph operations, goal persistence, affect metadata, MCP tools, and integration.