amem

The memory layer for AI coding tools. Local-first, semantic, 33 MCP tools with consolidation and project scoping. Works with Claude Code, Cursor, Windsurf & any MCP client.

The memory layer for AI coding tools.
Tell your AI once — it remembers everywhere.

npm CI MIT MCP Node 18+


| 🎯 97.8% R@5 | ⚡ ~14ms p50 | 🛠 33 Tools | 🔒 100% Local |
|---|---|---|---|
| LongMemEval-S, 500q | Full recall pipeline | Complete memory toolkit | No cloud required |

Quick Start · How It Works · Benchmarks · Tools · Dashboard · Architecture


💡 The Problem

Every AI tool starts from zero. Every session. Every tool.

- You: "Don't use 'any' in TypeScript"     → told Claude 3 times. Copilot still doesn't know.
- You: "We chose PostgreSQL over MongoDB"   → explained in Cursor. Claude has no idea.
+ With amem: tell it once, every AI tool remembers — forever.
See it in action
You (in Claude Code):  "Don't use any type in TypeScript"
  └─ amem stores this as a correction (priority 1.0, confidence 100%)

You (switch to Copilot): starts coding
  └─ Copilot already knows — amem feeds it the same correction

You (open Cursor): "What do you remember about TypeScript?"
  └─ Instantly recalls: "Don't use any type" + all related preferences

No cloud. No API keys. One SQLite file. Everything stays on your machine.


🚀 Quick Start

Claude Code (recommended)

/plugin marketplace add amanasmuei/amem
/plugin install amem

GitHub Copilot CLI

copilot plugin marketplace add amanasmuei/amem
copilot plugin install amem
📦 Cursor / Windsurf / Any MCP Client
npm install -g @aman_asmuei/amem
amem-cli init      # Detects & configures all installed AI tools
amem-cli rules     # Generates extraction rules for proactive memory use

Or add to your MCP config manually:

{
  "mcpServers": {
    "amem": {
      "command": "npx",
      "args": ["-y", "@aman_asmuei/amem"]
    }
  }
}

Verify it works:

amem-cli stats     # Should show "0 memories" initially

💬 Tell your AI: "Remember: always use strict TypeScript, never use any type"

🔄 Start a new session: "What do you remember about TypeScript?" — it recalls instantly.


🧬 Powered by amem-core

amem is the MCP server. The retrieval engine lives in @aman_asmuei/amem-core.

     Claude Code / Copilot / Cursor / any MCP client
                        │
                        │ MCP (stdio)
                        ▼
        ┌──────────────────────────────────┐
        │  @aman_asmuei/amem  (this pkg)   │
        │  33 Tools · 7 Resources · 2 Prompts │
        │  CLI · Hooks · Dashboard         │
        └────────────────┬─────────────────┘
                         │ imports
                         ▼
        ┌──────────────────────────────────┐
        │  @aman_asmuei/amem-core          │
        │  Embeddings · HNSW · Recall      │
        │  Knowledge Graph · Reflection    │
        │  97.8% R@5 on LongMemEval-S      │
        └────────────────┬─────────────────┘
                         ▼
              ┌────────────────────┐
              │  SQLite + WAL      │
              │  ~/.amem/memory.db │
              └────────────────────┘
Why two packages?
| Package | Role | Install |
|---|---|---|
| @aman_asmuei/amem (this) | MCP server + CLI + hooks | npm i -g @aman_asmuei/amem |
| @aman_asmuei/amem-core | Pure TS library, zero MCP deps | npm i @aman_asmuei/amem-core |

The same engine powers amem (MCP server), aman-agent (CLI), aman-tg (Telegram bot), and any Node app you give memory to. Retrieval improvements ship via amem-core. MCP-tool changes ship via amem. They version independently.

The 97.8% R@5 headline is the engine quality from amem-core (LongMemEval-S, session-level, 500 questions, zero API calls) — exactly what you get whether you call it through MCP or import the library directly.


βš™οΈ How It Works

amem captures knowledge in three layers — from fully automatic to fully manual:

| Layer | How | What it does |
|---|---|---|
| Automatic | Lifecycle hooks | Captures tool observations, auto-extracts corrections/decisions/patterns at session end |
| AI-driven | Extraction rules | Your AI proactively calls memory_store when you correct it, make decisions, or express preferences |
| Manual | Natural language | "Remember: we use PostgreSQL" or "Forget the Redis memory" |

Memory Types

| Priority | Type | Example |
|---|---|---|
| 1.0 | correction | "Don't mock the DB in integration tests" |
| 0.85 | decision | "Chose Postgres over Mongo for ACID" |
| 0.7 | pattern | "Prefers early returns over nesting" |
| 0.7 | preference | "Uses pnpm, not npm" |
| 0.5 | topology | "Auth module lives in src/auth/" |
| 0.4 | fact | "API launched January 2025" |

Corrections always surface first — they are your AI's hard constraints.
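As an illustration, the priority column above can be read as a simple type-to-importance map that orders recall candidates. This is a sketch with hypothetical names, not amem's internal code:

```typescript
// Illustrative sketch: type-based priorities mirroring the table above.
// Names (TYPE_PRIORITY, byPriority) are hypothetical, not amem's API.
type MemoryType =
  | "correction" | "decision" | "pattern"
  | "preference" | "topology" | "fact";

const TYPE_PRIORITY: Record<MemoryType, number> = {
  correction: 1.0,
  decision: 0.85,
  pattern: 0.7,
  preference: 0.7,
  topology: 0.5,
  fact: 0.4,
};

interface Memory { content: string; type: MemoryType; }

// Sort candidates so higher-priority types (corrections) come first.
function byPriority(memories: Memory[]): Memory[] {
  return [...memories].sort(
    (a, b) => TYPE_PRIORITY[b.type] - TYPE_PRIORITY[a.type]
  );
}
```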

🔄 Memory Tiers & Temporal Validity

Memory Tiers

| Tier | Behavior |
|---|---|
| Core | Always injected at session start (~500 tokens). Your most critical corrections. |
| Working | Session-scoped, auto-surfaced for current task. |
| Archival | Default. Searchable but not auto-injected. |
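A sketch of how the core tier's ~500-token budget could be applied at session start. The 4-characters-per-token estimate and function names are assumptions for illustration, not amem internals:

```typescript
// Illustrative sketch of tier-based injection under a token budget.
// Token counting and field names are assumptions, not amem's schema.
interface TieredMemory { content: string; tier: "core" | "working" | "archival"; }

// Rough token estimate: ~4 characters per token (an assumption).
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

// Inject core-tier memories at session start until the budget is spent.
function coreInjection(memories: TieredMemory[], budget = 500): string[] {
  const injected: string[] = [];
  let used = 0;
  for (const m of memories) {
    if (m.tier !== "core") continue;       // working/archival are not auto-injected
    const cost = estimateTokens(m.content);
    if (used + cost > budget) break;       // stay within the core budget
    injected.push(m.content);
    used += cost;
  }
  return injected;
}
```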

Temporal Validity

Memories aren't forever. When facts change:

  • Old memories get expired (not deleted) — preserved for "what was true in March?"
  • Contradictions are auto-detected — storing a new decision auto-expires the old one
  • Query any point in time with memory_since
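The expire-not-delete behavior can be sketched as a validity window per memory. The validFrom/validTo field names are illustrative, not amem's actual schema:

```typescript
// Sketch of temporal validity: expiring sets an end timestamp instead of
// deleting, so point-in-time queries still work. Field names are assumptions.
interface TemporalMemory { content: string; validFrom: number; validTo?: number; }

// Expire a memory at a given time (preserved for history, not deleted).
function expire(m: TemporalMemory, at: number): void {
  m.validTo = at;
}

// "What was true at time t?": valid if t falls inside the validity window.
function validAt(memories: TemporalMemory[], t: number): TemporalMemory[] {
  return memories.filter(
    (m) => m.validFrom <= t && (m.validTo === undefined || t < m.validTo)
  );
}
```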
🧠 Self-Evolving Memory Loop

Your memory doesn't just store — it learns from its own structure. Call memory_reflect to trigger the reflection engine:

memory_reflect → Analyzes your entire memory graph
  │
  ├─ Clusters related memories (HNSW neighbor graph)
  ├─ Detects contradictions (negation pairs, numerical, low-overlap)
  ├─ Identifies synthesis candidates
  ├─ Surfaces knowledge gaps (topics with sparse recall)
  └─ Returns a structured report with suggested actions

The evolution loop:

  1. Reflect — memory_reflect clusters your memories and finds patterns
  2. Synthesize — AI merges related clusters into higher-order principles via memory_store
  3. Link — memory_relate connects syntheses to source memories (tracked via synthesis lineage)
  4. Repeat — each cycle, the graph becomes more coherent and abstract

The system auto-nudges when reflection is due (>7 days or >50 new memories since last run).
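The negation-pair layer of contradiction detection can be approximated with a small heuristic. This sketch is illustrative only; amem-core's detector also runs numerical and low-overlap checks:

```typescript
// Minimal sketch of negation-pair contradiction detection: two memories on the
// same topic, one phrased "always", one "never". Pair list and matching are
// illustrative assumptions, not amem-core's implementation.
const NEGATION_PAIRS: Array<[string, string]> = [
  ["always", "never"],
  ["use", "don't use"],
];

function isNegationPair(a: string, b: string): boolean {
  const la = a.toLowerCase();
  const lb = b.toLowerCase();
  // Flag when one memory contains a term and the other its negation.
  return NEGATION_PAIRS.some(
    ([pos, neg]) =>
      (la.includes(pos) && lb.includes(neg)) ||
      (la.includes(neg) && lb.includes(pos))
  );
}
```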

📊 What the reflection report looks like
# Memory Reflection Report
Analyzed 127 memories in 12ms
Health Score: 68/100

## Stats
- Clusters: 8 (avg size: 4.2)
- Clustered: 34 | Orphans: 93
- Contradictions: 2
- Synthesis candidates: 3
- Knowledge gaps: 4

## Contradictions Found
⚠ Opposing language detected (23d apart, 87% similar)
  A: a1b2c3d4 "Always use semicolons in JavaScript..."
  B: e5f6g7h8 "Never use semicolons in JavaScript..."
  → Expire older memory a1b2c3d4 — newer supersedes it

## Synthesis Candidates
### cluster-0 (4 patterns)
  "These 4 related memories form a cluster about 'typescript, types':
  [patterns]:
    - 'Always use strict TypeScript types'
    - 'Prefer strict null checks'
    - 'Use unknown instead of any'
    - 'Enable strictNullChecks in tsconfig'

  Synthesize into a higher-order principle..."

## Knowledge Gaps
- "kubernetes deployment" — asked 3x, avg 25% confidence
- "database migration strategy" — asked 2x, avg 0% confidence

📈 Benchmarks

Recall Accuracy (LongMemEval)

All numbers from amem-core v0.5.1 — the retrieval engine powering this MCP server. Zero API calls, all local, fully reproducible.

LongMemEval-S (session-level) — headline metric

| Metric | Score |
|---|---|
| R@1 | 95.0% |
| R@3 | 97.0% |
| R@5 | 🏆 97.8% |
| R@10 | 99.0% |

500 questions · CPU only · zero API calls

LongMemEval Oracle (turn-level)

| Metric | Score |
|---|---|
| R@1 | 66.2% |
| R@3 | 90.8% |
| R@5 | 94.6% |
| R@10 | 97.5% |

479 scoreable questions · 301s runtime · Node 22

Pipeline: local bge-small-en-v1.5 bi-encoder + ms-marco-MiniLM-L-6-v2 cross-encoder (int8, batched, default-on). See amem-core benchmarks for full per-type breakdowns, pipeline evolution, and honest notes.

Why this matters for the "rewrite it in Rust" question. The 10.3ms rerank figure below reflects a ~30% speedup over the per-pair implementation it replaced — achieved with ~20 lines of batching plus int8 quantization, no native rewrite. The hot paths were already efficient; the remaining wins came from using them more carefully. We stay on TypeScript.

Search Latency

Full recall pipeline (v0.5.1+)

| Stage | p50 | Share |
|---|---|---|
| Embed (bi-encoder) | 3.0ms | 22% |
| Retrieve (HNSW + multi-strategy) | 0.1ms | 1% |
| Rerank (int8 cross-encoder) | 10.3ms | 74% |
| Total | ~14ms | 100% |

HNSW index only (vector search)

| Memories | HNSW | Brute-force | Speedup |
|---|---|---|---|
| 100 | 0.05ms | 0.10ms | 2x |
| 1,000 | 0.06ms | 0.50ms | 8x |
| 5,000 | 0.08ms | 2.44ms | 30x |
| 10,000 | 0.08ms | 5.35ms | 67x |

Measured: 100 searches averaged, 384-dim embeddings, top-10 results.

Sub-0.1ms at any scale — effectively O(log n). HNSW is an optional dependency; brute-force search is the fallback when it's unavailable.
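The brute-force fallback amounts to scoring every stored vector by cosine similarity and keeping the top k, which is O(n) per query versus HNSW's ~O(log n). A minimal sketch (function names are illustrative):

```typescript
// Sketch of the brute-force fallback used when HNSW is unavailable:
// score every vector against the query, keep the k best indices.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// O(n log n) due to the sort; fine as a fallback, slow at scale.
function bruteForceTopK(query: number[], vectors: number[][], k: number): number[] {
  return vectors
    .map((v, i) => ({ i, score: cosine(query, v) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((r) => r.i);
}
```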


πŸ› οΈ Tools Reference

Core Memory (7 tools)

| Tool | Description |
|---|---|
| memory_store | Store a memory with type, tags, confidence. Auto-redacts private content, auto-expires contradictions. |
| memory_recall | Semantic search — compact mode by default (~10x token savings). Use memory_detail for full content. |
| memory_detail | Retrieve full content by ID after compact recall. |
| memory_context | Load all relevant context for a topic, organized by type with token budgeting. |
| memory_extract | Batch-save multiple memories from conversation. |
| memory_forget | Delete by ID or query (with confirmation). |
| memory_inject | Surface corrections + decisions + graph neighbors before coding starts. |
Precision, History, Advanced, Admin, Reminders, and Maintenance tools (26 more)

Precision & History (5 tools)

| Tool | Description |
|---|---|
| memory_patch | Surgical field-level edit with auto-snapshot. |
| memory_versions | View full edit history or restore any version. |
| memory_search | Exact full-text search via FTS5 with compact mode. |
| memory_since | Temporal query with natural language ranges (7d, 2w, 1h). |
| memory_relate | Build a typed knowledge graph between memories. |

Advanced (6 tools)

| Tool | Description |
|---|---|
| memory_multi_recall | Multi-strategy search with compact mode: semantic + FTS5 + graph + temporal. |
| memory_tier | Move memories between tiers: core / working / archival. |
| memory_expire | Mark as no longer valid — preserved for history, excluded from recall. |
| memory_summarize | Store structured session summary with decisions, corrections, metrics. |
| memory_history | View past session summaries. |
| memory_reflect | Self-evolving reflection engine — clusters memories, detects contradictions, identifies synthesis candidates, surfaces knowledge gaps. |

Admin & Sync (4 tools)

| Tool | Description |
|---|---|
| memory_doctor | Run read-only health diagnostics on the amem database. |
| memory_repair | Perform safe, targeted repairs on the amem database. |
| memory_config | Get or set amem configuration with safety guardrails. |
| memory_sync | Import or export memories between amem and other systems (Claude auto-memory, Copilot instructions). |

Reminders (4 tools)

| Tool | Description |
|---|---|
| reminder_set | Create a reminder with optional deadline and scope. |
| reminder_list | List active (or all) reminders, filterable by scope. |
| reminder_check | Show overdue, today, and upcoming (7 days). |
| reminder_complete | Mark as done (supports partial ID). |

Log & Maintenance (7 tools)

| Tool | Description |
|---|---|
| memory_log | Append raw conversation turns (lossless, append-only). |
| memory_log_recall | Search or replay log by session, keyword, or recency. |
| memory_log_cleanup | Prune old entries with configurable retention. |
| memory_stats | Counts, type breakdown, confidence distribution. |
| memory_export | Export as Markdown or JSON. |
| memory_import | Bulk import from JSON with automatic dedup. |
| memory_consolidate | Merge duplicates, prune stale, promote frequent, decay idle. |

📖 Usage Guide

Storing Memories

Natural language (easiest)

"Remember: we use PostgreSQL, not MongoDB"
"Store a correction: never use console.log in production"
"Note that the auth module is in src/auth/"

Explicit tool calls

memory_store({
  content: "Never use 'any' — define proper interfaces",
  type: "correction",
  tags: ["typescript"],
  confidence: 1.0
})

Recalling Memories

// Step 1: Compact index — ~50-100 tokens (default)
memory_recall({ query: "auth decisions", limit: 5 })
// -> a1b2c3d4 [decision] Auth service uses JWT tokens... (92%)
// -> e5f6g7h8 [correction] Never store tokens in localStorage... (100%)

// Step 2: Full details only for what you need
memory_detail({ ids: ["a1b2c3d4", "e5f6g7h8"] })
More search options
// Multi-strategy: semantic + FTS5 + graph + temporal
memory_multi_recall({
  query: "authentication architecture",
  limit: 10,
  weights: { semantic: 0.4, fts: 0.3, graph: 0.15, temporal: 0.15 }
})

// Exact keyword search (FTS5 syntax)
memory_search({ query: "OAuth PKCE" })
memory_search({ query: '"event sourcing"' })     // phrase match
memory_search({ query: "auth* NOT legacy" })      // boolean
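Under the hood, multi-strategy results are combined by weight. A minimal sketch of the fusion step, assuming each strategy returns a score normalized to 0..1 (the combination shape is an assumption; the default weights come from the example above):

```typescript
// Illustrative sketch of multi-strategy score fusion with the default weights
// from memory_multi_recall. Per-strategy scoring is out of scope here.
interface StrategyScores { semantic: number; fts: number; graph: number; temporal: number; }
type Weights = StrategyScores;

const DEFAULT_WEIGHTS: Weights = { semantic: 0.4, fts: 0.3, graph: 0.15, temporal: 0.15 };

// Weighted sum of the four strategy scores (each assumed in 0..1).
function fuse(scores: StrategyScores, w: Weights = DEFAULT_WEIGHTS): number {
  return (
    scores.semantic * w.semantic +
    scores.fts * w.fts +
    scores.graph * w.graph +
    scores.temporal * w.temporal
  );
}
```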

Managing Memories

Edit, expire, promote, link
// Surgical edit with auto-snapshot for rollback
memory_patch({ id: "a1b2c3d4", field: "content", value: "Updated text", reason: "clarified" })

// View edit history / restore
memory_versions({ memory_id: "a1b2c3d4" })

// Expire (preserve for history, exclude from recall)
memory_expire({ id: "a1b2c3d4", reason: "Migrated to GraphQL" })

// Promote to core tier (always loaded at session start)
memory_tier({ id: "a1b2c3d4", tier: "core" })

// Link related memories (graph builds itself, but you can add manual links)
memory_relate({ action: "relate", from_id: "abc", to_id: "xyz", relation_type: "supports" })

Relation types: supports, contradicts, depends_on, supersedes, related_to, caused_by, implements — or define your own.

Reminders

Cross-session deadline tracking
reminder_set({ content: "Review PR #42", due_at: 1743033600000, scope: "global" })

reminder_check({})
// -> [OVERDUE] Review PR #42
// -> [TODAY] Deploy auth service
// -> [UPCOMING] Write quarterly report

reminder_complete({ id: "a1b2c3d4" })
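The overdue/today/upcoming buckets can be derived from due_at timestamps (milliseconds). A sketch under assumed boundaries (UTC midnight for "today", a 7-day window for "upcoming"); the actual cutoffs in amem may differ:

```typescript
// Sketch of reminder_check-style bucketing from millisecond timestamps.
// Boundary choices (UTC midnight, 7-day window) are assumptions.
type Bucket = "overdue" | "today" | "upcoming" | "later";

const DAY_MS = 24 * 60 * 60 * 1000;

function classify(dueAt: number, now: number): Bucket {
  if (dueAt < now) return "overdue";
  const endOfToday = now - (now % DAY_MS) + DAY_MS; // next UTC midnight
  if (dueAt < endOfToday) return "today";
  if (dueAt < now + 7 * DAY_MS) return "upcoming";
  return "later";
}
```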

Privacy

Automatic redaction
// Private blocks stripped before storage
memory_store({
  content: "DB password is <private>hunter2</private>, connect to prod at db.example.com",
  type: "topology", tags: ["database"]
})
// Stored: "DB password is [REDACTED], connect to prod at db.example.com"

// API keys, tokens, passwords auto-redacted by pattern matching
// Configure patterns in ~/.amem/config.json
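The `<private>` tag stripping shown above can be sketched as a single regex pass; the exact mechanism inside amem is an assumption here:

```typescript
// Sketch of <private> block redaction: strip tagged spans before storage,
// leaving a marker. The tag and replacement text mirror the example above;
// the regex approach itself is an assumption about amem's implementation.
function redactPrivate(content: string): string {
  // Non-greedy, multi-line match so each <private>...</private> block is
  // replaced independently.
  return content.replace(/<private>[\s\S]*?<\/private>/g, "[REDACTED]");
}
```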

βš”οΈ Honest Comparison: amem vs graphify

Click to expand — how amem compares to graphify

graphify is the most common "what about X?" when people find amem. They solve fundamentally different problems and are genuinely complementary.

What each tool does

|  | amem | graphify |
|---|---|---|
| One-liner | Persistent memory across AI sessions | Codebase → knowledge graph |
| Core question | "What has my AI learned about me?" | "What does this codebase look like?" |
| Input | Natural language (corrections, decisions, preferences) | Files (code, docs, PDFs, images, video) |
| Output | Recalled memories ranked by relevance | Structural graph + report + interactive HTML |
| Persistence | Always — memory survives across sessions and tools | Snapshot — graph.json persists, but doesn't learn over time |
| When it runs | Continuously, every session | On-demand (/graphify .) or on commit via git hook |

Technical comparison

|  | amem | graphify |
|---|---|---|
| Runtime | TypeScript / Node (≥18) | Python (≥3.10) |
| Protocol | MCP server (33 tools, 7 resources) | AI skill (slash command) + optional MCP server |
| Storage | SQLite + FTS5 + WAL | NetworkX graph → JSON file |
| Search | Semantic embeddings + FTS5 + graph + reranking | Graph traversal (BFS/DFS) + node lookup |
| Embeddings | Local bge-small-en-v1.5 (384-dim) | None — uses graph topology, not vector similarity |
| Code understanding | None — stores what you tell it | Deep — tree-sitter AST for 25 languages |
| Multimodal | Text only | Code, docs, PDFs, images, video, audio |
| LLM required | No (all local) | Yes for docs/images (code is LLM-free via tree-sitter) |
| Benchmark | 97.8% R@5 on LongMemEval-S | 71.5x token reduction vs raw file reading |
| AI tool support | Claude Code, Copilot, Cursor, any MCP client | Claude Code, Codex, Copilot, Cursor, Gemini, Aider, Kiro, +10 more |

Where each wins

amem wins at:

  • Remembering your preferences, corrections, and decisions across projects and tools
  • Semantic recall — finding the right memory from a vague query (97.8% R@5)
  • Temporal intelligence — tracking what was true when, auto-expiring contradictions
  • Self-evolution — reflection engine clusters, detects contradictions, identifies gaps
  • Zero LLM dependency — everything runs locally, no API calls

graphify wins at:

  • Understanding code structure — call graphs, imports, class hierarchies, cross-file relationships
  • Multimodal ingestion — drop in code, papers, screenshots, videos; it graphs them all
  • Token efficiency — 71.5x compression means your AI reads structure, not raw files
  • Breadth of language support — 25 programming languages via tree-sitter AST
  • Breadth of AI tool support — 15+ platforms with dedicated install commands

Honest takeaways

  1. They don't compete. amem remembers your knowledge (decisions, corrections, preferences). graphify maps the codebase's structure (call graphs, dependencies, architecture). Different data, different access patterns.

  2. Use both if you want. Run graphify . to get a structural map of your project. Use amem to remember "we chose this architecture because X." The graph tells your AI what exists. The memory tells it why things are that way.

  3. graphify has broader platform coverage (15+ AI tools). amem has deeper integration where it works (MCP protocol with 33 tools, structured resources, prompts).

  4. graphify needs an LLM for non-code files. amem is fully local — no API calls, no model inference beyond the local embedding model.

  5. The real choice depends on your pain point. If your AI keeps forgetting your preferences and decisions → amem. If your AI can't navigate your codebase efficiently → graphify. If both → use both.


🌐 Platform Compatibility

| Feature | Claude Code | GitHub Copilot CLI | Cursor / Windsurf / Other |
|---|---|---|---|
| One-command plugin install | Yes | Yes | -- |
| 33 MCP tools | Yes | Yes | Yes |
| AI skills | 14 | 7 | -- |
| Auto-capture hooks | Yes | Yes | -- |
| Session auto-summarize | Yes | Yes | -- |
| Auto-memory sync | Yes | -- | -- |
| CLI setup (amem-cli init) | Yes | Yes | Yes |

Claude Code has the deepest integration (plugin + hooks + auto-memory sync). Copilot CLI is a close second. Other MCP clients get the full 33-tool server via manual config.

AI Skills

Available skills by platform
| What you say | Skill | Claude Code | Copilot CLI |
|---|---|---|---|
| "Remember never use any type" | remember | Yes | Yes |
| "What do you remember about auth?" | recall | Yes | Yes |
| "Load context for this task" | context | Yes | Yes |
| "Show memory stats" | stats | Yes | Yes |
| "Run memory doctor" | doctor | Yes | Yes |
| "Export my memories" | export | Yes | Yes |
| "List all corrections" | list | Yes | Yes |
| "Sync my Claude memory" | sync | Yes | -- |
| "Open the memory dashboard" | dashboard | Yes | -- |
| "Install hooks" | hooks | Yes | -- |

🔄 Working with Claude Code Auto-Memory

amem complements Claude's built-in auto-memory — it doesn't replace it.

|  | Claude auto-memory | amem |
|---|---|---|
| Capture | Automatic, zero config | Typed with confidence scores |
| Storage | Single markdown file | SQLite with search, graph, temporal |
| Recall | Entire file loaded every session | Only relevant memories surfaced |
| History | Overwritten on update | Versioned, temporal validity |
| Search | None | Semantic + FTS5 + graph + reranking |

Recommended: Keep both enabled. Run amem-cli sync to import Claude's memories into amem for unified, structured access.

Claude → amem sync
amem-cli sync              # Import all projects
amem-cli sync --dry-run    # Preview what would be imported
amem-cli sync --project myapp  # Import specific project
| Claude type | amem type | Confidence |
|---|---|---|
| feedback | correction | 1.0 |
| project | decision | 0.85 |
| user | preference | 0.8 |
| reference | topology | 0.7 |
amem → Copilot sync

Export amem memories to .github/copilot-instructions.md so Copilot reads them as persistent context:

amem-cli sync --to copilot              # Export to current project
amem-cli sync --to copilot --dry-run    # Preview without writing
amem-cli sync --to copilot --project /path/to/repo

This generates structured markdown grouped by priority:

  1. Corrections (MUST follow) — hard constraints
  2. Decisions — architectural choices
  3. Preferences — user preferences
  4. Patterns — coding conventions
  5. Context — topology + facts

The amem section is wrapped in <!-- amem:start/end --> markers — existing non-amem content in the file is preserved.
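Marker-preserving updates can be sketched as a replace-between-markers operation. The marker strings here mirror the ones mentioned above, but the function and its exact behavior are illustrative; check the generated file for the actual format:

```typescript
// Sketch of marker-based file updates: replace only the amem-managed block,
// preserving everything outside it. Marker strings are assumptions based on
// the documented <!-- amem:start/end --> convention.
const START = "<!-- amem:start -->";
const END = "<!-- amem:end -->";

function upsertAmemSection(existing: string, section: string): string {
  const block = `${START}\n${section}\n${END}`;
  const start = existing.indexOf(START);
  const end = existing.indexOf(END);
  if (start !== -1 && end !== -1) {
    // Replace the previous amem block in place, keeping user content around it.
    return existing.slice(0, start) + block + existing.slice(end + END.length);
  }
  // First export: append the block after any existing content.
  return existing ? `${existing}\n\n${block}\n` : `${block}\n`;
}
```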

Cross-tool sync: Decisions made in Claude sessions automatically inform Copilot:

Claude Code → amem sync → amem DB → amem sync --to copilot → copilot-instructions.md

📊 Dashboard & Knowledge Graph

amem-cli dashboard              # Opens at localhost:3333
amem-cli dashboard --port=8080  # Custom port

Full-featured web dashboard with:

  • πŸ” Memory browser β€” search, filter by type/tier/source, inline actions (promote, demote, expire)
  • πŸ•ΈοΈ Interactive knowledge graph β€” zoom, pan, click-to-focus with neighborhood highlighting, detail panel, search, directional edges
  • πŸ“ˆ Analytics β€” confidence distribution, type breakdown, session timeline
  • ⏰ Reminders β€” view and manage cross-session tasks
  • πŸ“‹ Copilot Preview β€” see what would be exported to copilot-instructions.md

💻 CLI Reference

# Setup
amem-cli init                          # Auto-configure AI tools
amem-cli rules                         # Generate extraction rules
amem-cli hooks                         # Install hooks for Claude Code
amem-cli hooks --target copilot        # Install hooks for GitHub Copilot CLI
amem-cli hooks --uninstall             # Remove hooks
amem-cli sync                          # Import Claude auto-memory → amem
amem-cli sync --to copilot             # Export amem → copilot-instructions.md
amem-cli doctor                        # Health diagnostics
amem-cli repair                        # Repair corrupted database from backups

# Dashboard
amem-cli dashboard                     # Web dashboard (localhost:3333)

# Memory operations
amem-cli recall "authentication"       # Semantic search
amem-cli stats                         # Statistics
amem-cli list --type correction        # List by type
amem-cli export --file memories.md     # Export to file
amem-cli forget abc12345               # Delete by short ID
amem-cli reset --confirm               # Wipe all data

πŸ— Architecture

                        Your AI Tool
           Claude Code / Copilot CLI / any MCP client
                    │                │
                    │ MCP (stdio)    │ Lifecycle Hooks
                    ▼                ▼
          ┌─────────────────────────────────┐
          │   @aman_asmuei/amem             │  ← this package
          │                                 │
          │  33 Tools · 7 Resources · 2 Prompts │
          │  Slash commands · CLI · Hooks   │
          │  Config: ~/.amem/config.json    │
          └────────────────┬────────────────┘
                           │ imports
                           ▼
          ┌─────────────────────────────────┐
          │   @aman_asmuei/amem-core        │  ← the engine
          │                                 │
          │  Multi-Strategy Retrieval       │
          │  [HNSW] + [FTS5] + [Graph] + [Temporal] │
          │       + query expansion         │
          │       + cross-encoder reranker  │
          │                                 │
          │  Self-Evolving Reflection       │
          │  [Clustering] + [Contradictions]│
          │  + [Synthesis] + [Gap Detection]│
          │                                 │
          │  Embeddings: bge-small-en-v1.5  │
          │  Reranker: ms-marco-MiniLM int8 │
          │  97.8% R@5 on LongMemEval-S     │
          └────────────────┬────────────────┘
                           │
                           ▼
          ┌─────────────────────────────────┐
          │   SQLite + WAL + FTS5           │
          │   ~/.amem/memory.db             │
          │                                 │
          │   memories       (tiered)       │
          │   conversation_log (raw)        │
          │   memory_versions (history)     │
          │   memory_relations (graph)      │
          │   synthesis_lineage             │
          │   knowledge_gaps                │
          │   session_summaries             │
          │   reminders                     │
          └─────────────────────────────────┘

The amem MCP server is a thin wrapper around amem-core. The retrieval engine, embeddings, knowledge graph, reflection — all live in amem-core and version independently. Bug in MCP wiring? Republish amem. Recall improvement? Republish amem-core. No coupling.

Ranking Formula

score = relevance x 0.45 + recency x 0.2 + confidence x 0.2 + importance x 0.15
| Factor | How it works |
|---|---|
| Relevance | Cosine similarity via HNSW index; query-expanded keyword fallback |
| Recency | Exponential decay (0.995^hours) |
| Confidence | Reinforced by repeated confirmation (0-1) |
| Importance | Type-based: corrections 1.0 ... facts 0.4 |

Additive scoring ensures no single low factor kills the ranking.
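The formula translates directly to code. A sketch where only the weights and the 0.995-per-hour decay come from the documentation above; how each factor input is computed is simplified for illustration:

```typescript
// Direct transcription of the additive ranking formula:
// score = relevance*0.45 + recency*0.2 + confidence*0.2 + importance*0.15
// Factor inputs (relevance, confidence, importance) are assumed precomputed.
interface Factors { relevance: number; ageHours: number; confidence: number; importance: number; }

function rank(f: Factors): number {
  const recency = Math.pow(0.995, f.ageHours); // documented exponential decay per hour
  return f.relevance * 0.45 + recency * 0.2 + f.confidence * 0.2 + f.importance * 0.15;
}
```

Because the factors are added rather than multiplied, a fresh high-confidence correction with zero query relevance still retains 0.55 of the maximum score instead of collapsing to zero.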


βš™οΈ Configuration

Environment variables
| Variable | Default | Description |
|---|---|---|
| AMEM_DIR | ~/.amem | Storage directory |
| AMEM_DB | ~/.amem/memory.db | Database path |
| AMEM_PROJECT | (auto from git) | Project scope override |
Config file (~/.amem/config.json)

Created automatically with defaults:

{
  "retrieval": {
    "semanticWeight": 0.4,
    "ftsWeight": 0.3,
    "graphWeight": 0.15,
    "temporalWeight": 0.15,
    "rerankerEnabled": true
  },
  "privacy": {
    "enablePrivateTags": true,
    "redactPatterns": ["..."]
  },
  "tiers": {
    "coreMaxTokens": 500,
    "workingMaxTokens": 2000
  },
  "hooks": {
    "enabled": true,
    "captureToolUse": true,
    "captureSessionEnd": true
  }
}
📋 Version history

v0.23.0 — Interactive Knowledge Graph Dashboard

Full-width graph explorer with zoom/pan, click-to-focus neighborhood highlighting, detail panel with relation navigation, search & filter, directional edges, force-directed layout. Admin tools (doctor, repair, config, sync). 255 tests across 18 suites.

v0.19.0 — Self-Evolving Memory Loop

Reflection engine with HNSW-based clustering, 3-layer contradiction detection (negation + numerical + low-overlap), synthesis candidates with lineage tracking, knowledge gap detection, utility scoring, auto-trigger nudge in memory_inject. New DB tables: synthesis_lineage, knowledge_gaps, reflection_meta. Migration v5.

v0.18.0 — Progressive Disclosure & Scale

HNSW vector index (67x faster at 10k), compact mode default on recall/search, DB repair CLI, concurrent access safety, heuristic conversation extractor, session-end auto-extraction.

v0.13.0 — World-Class Recall

bge-small-en-v1.5 embeddings, additive scoring, query expansion, auto-relate knowledge graph, graph-aware injection, amem doctor, CI benchmarks.

v0.9.x — Temporal Intelligence

Temporal validity, auto-expire contradictions, multi-strategy retrieval, cross-encoder reranking, memory tiers, privacy tags, lifecycle hooks, session summaries, dashboard, config system.

v0.7.0 — v0.8.0

Import/export, confidence decay, embedding cache, multi-process safety, auto-configure CLI, dashboard.

v0.1.0 — v0.5.x

Core store/recall, local embeddings, SQLite + WAL, consolidation, project scoping, reminders, conversation log, knowledge graph, FTS5, progressive disclosure.


🧰 Tech Stack

| Layer | Technology |
|---|---|
| Protocol | MCP SDK ^1.25 |
| Language | TypeScript 5.6+, strict mode |
| Database | SQLite + WAL + FTS5 |
| Embeddings | HuggingFace bge-small-en-v1.5 (local, 80MB) + HNSW vector index |
| Reranking | ms-marco-MiniLM-L-6-v2 (default-on, int8, batched, local) |
| Validation | Zod 3.25+ with .strict() schemas |
| Testing | Vitest — 281 tests across 19 suites + recall benchmarks |
| CI/CD | GitHub Actions, npm publish on release |

🤝 Contributing

git clone https://github.com/amanasmuei/amem.git
cd amem && npm install
npm run build   # zero TS errors
npm test        # 281 tests pass

PRs must pass CI before merge. See Issues for open tasks.



Built with ❤️ in 🇲🇾 Malaysia by Aman Asmuei

GitHub npm Issues

MIT License · Star ⭐ if amem saves your AI from amnesia
