
Your AI's memory shouldn't live on someone else's server — 12 MCP tools that give it persistent context from your local markdown, no cloud, no API keys, single binary.

SAME — Stateless Agent Memory Engine


Your AI forgets everything between sessions. Not anymore.

Every time you start a new session with Claude Code, Cursor, or any AI coding tool, your agent starts from zero. Decisions you made yesterday? Gone. Context from last week? Gone. That architectural choice you spent 30 minutes discussing? You'll explain it again.

SAME gives your AI persistent memory from your existing markdown notes (any folder of .md files — no Obsidian required). No cloud. No API keys. One binary.

See it in 60 seconds

```shell
curl -fsSL statelessagent.com/install.sh | bash
same demo
```

same demo creates a temporary vault with sample notes, runs semantic search, and shows your AI answering questions from your notes — all locally, no accounts, no API keys.


Quickstart

```shell
# 1. Install (pick one)
curl -fsSL statelessagent.com/install.sh | bash   # direct binary
npm install -g @sgx-labs/same                      # or via npm

# 2. Point SAME at your project
cd ~/my-project && same init

# 3. Ask your notes a question
same ask "what did we decide about authentication?"

# 4. Your AI now remembers (hooks + MCP tools active)
# Start Claude Code, Cursor, or any MCP client — context surfaces automatically
```

That's it. Your AI now has memory.


Add to Your AI Tool

Claude Code (hooks + MCP — full experience)

```shell
same init          # sets up hooks + MCP in one step
```

SAME installs 6 Claude Code hooks automatically. Context surfaces on every session start. Decisions extracted on stop. No config file to edit.

Claude Code / Cursor / Windsurf (MCP only)

Or add manually to your MCP config (.mcp.json, .claude/settings.json, Cursor MCP settings):

```json
{
  "mcpServers": {
    "same": {
      "command": "npx",
      "args": ["-y", "@sgx-labs/same", "mcp", "--vault", "/absolute/path/to/your/notes"]
    }
  }
}
```

Replace /absolute/path/to/your/notes with the actual path to your project or notes directory. 12 tools available instantly. Works without Ollama (keyword fallback).


Why SAME

| Problem | Without SAME | With SAME |
|---|---|---|
| New session starts | Re-explain everything | AI picks up where you left off |
| "Didn't we decide to use JWT?" | Re-debate for 10 minutes | Decision surfaces automatically |
| Switch between projects | Manually copy context | Each project has its own memory |
| Close terminal accidentally | All context lost | Next session recovers via handoff |
| Ask about your own notes | Copy-paste into chat | same ask with source citations |
| Context compacted mid-task | AI restarts from scratch | Pinned notes + handoffs survive compaction |

The Numbers

| Metric | Value | What it means |
|---|---|---|
| Retrieval precision | 99.5% | When SAME surfaces a note, it's almost always the right one |
| MRR | 0.949 | The right note surfaces first, almost every time |
| Coverage | 90.5% | 9 out of 10 relevant notes found |
| Prompt overhead | <200ms | You won't notice it |
| Binary size | ~10MB | Smaller than most npm packages |
| Setup time | <60 seconds | One curl command |

Benchmarked against 105 ground-truth test cases. See the benchmark section below for methodology.


How It Works

```
┌─────────────┐     ┌──────────┐     ┌──────────┐     ┌─────────────────┐
│  Your Notes │     │  Ollama  │     │  SQLite  │     │  Your AI Tool   │
│   (.md)     │────>│ (embed)  │────>│ (search) │────>│ Claude / Cursor │
│             │     │ local    │     │ + FTS5   │     │ via Hooks + MCP │
└─────────────┘     └──────────┘     └──────────┘     └─────────────────┘
                                          │                    │
                                     ┌────▼────┐          ┌────▼────┐
                                     │ Ranking │          │  Write  │
                                     │ Engine  │          │  Side   │
                                     └─────────┘          └─────────┘
                                     semantic +           decisions,
                                     recency +            handoffs,
                                     confidence           notes
```

Your markdown notes are embedded locally via Ollama and stored in a SQLite database with vector search. When your AI tool starts a session, SAME surfaces relevant context automatically. Decisions get extracted. Handoffs get generated. The next session picks up where you left off. Everything stays on your machine.

No Ollama? No problem. SAME Lite runs with zero external dependencies. Keyword search via SQLite FTS5 powers all features. Install Ollama later and same reindex upgrades to semantic mode instantly.
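
The ranking stage in the diagram combines semantic distance, recency, and confidence into one composite score. Here is a minimal Python sketch of that idea; the weights and constants are made up for illustration, and SAME's real implementation (in Go, in ranking.go) differs:

```python
import math
import time

def composite_score(distance, modified_at, confidence,
                    max_distance=16.0, half_life_days=30.0):
    """Illustrative composite ranking: semantic + recency + confidence.

    distance     -- vector distance from the query (lower is better)
    modified_at  -- note mtime as a Unix timestamp
    confidence   -- 0..1 signal (e.g. accumulated feedback votes)
    """
    semantic = max(0.0, 1.0 - distance / max_distance)
    age_days = (time.time() - modified_at) / 86400
    recency = math.exp(-math.log(2) * age_days / half_life_days)
    # Weights below are invented for this sketch; SAME tunes its own constants.
    return 0.6 * semantic + 0.25 * recency + 0.15 * confidence

fresh = composite_score(4.0, time.time(), 0.9)
stale = composite_score(4.0, time.time() - 90 * 86400, 0.9)
assert fresh > stale  # same semantic match, fresher note ranks higher
```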


Features

| Feature | Description | Requires Ollama? |
|---|---|---|
| Semantic search | Find notes by meaning, not keywords | Yes |
| Keyword search (FTS5) | Full-text search fallback | No |
| same ask (RAG) | Ask questions, get cited answers from your notes | Yes (chat model) |
| Session handoffs | Auto-generated continuity notes | No |
| Session recovery | Crash-safe — next session picks up even if terminal closed | No |
| Decision extraction | Architectural choices remembered across sessions | No |
| Pinned notes | Critical context always included | No |
| File claims (same claim) | Advisory read/write ownership for multi-agent coordination | No |
| Context surfacing | Relevant notes injected into AI prompts | No* |
| same demo | Try SAME in 60 seconds | No |
| same tutorial | 6 hands-on lessons | No |
| same doctor | 18 diagnostic checks | No |
| Push protection | Safety rails for multi-agent workflows | No |
| same seed install | One-command install of pre-built knowledge vaults | No* |
| Cross-vault federation | Search across all vaults at once | No* |
| MCP server (12 tools) | Works with any MCP client | No* |
| Privacy tiers | _PRIVATE/ never indexed, research/ never committed | No |

*Semantic mode requires Ollama; keyword fallback is automatic.


Seed Vaults

Pre-built knowledge vaults that give your AI expert-level context in one command.

```shell
same seed install claude-code-power-user
```

| Seed | Notes | What you get |
|---|---|---|
| claude-code-power-user | 52 | Master-level Claude Code patterns, workflows, and tricks |
| ai-agent-architecture | 58 | Agent design patterns, orchestration, memory strategies |
| personal-productivity-os | 118 | GTD, time blocking, habit systems, review frameworks |

10 seeds available — 622+ notes of expert knowledge. Browse with same seed list.

Browse all seeds


MCP Tools

SAME exposes 12 tools via MCP for any compatible client.

Read

| Tool | What it does |
|---|---|
| search_notes | Semantic search across your knowledge base |
| search_notes_filtered | Search with domain/workstream/tag/agent filters |
| search_across_vaults | Federated search across multiple vaults |
| get_note | Read full note content by path |
| find_similar_notes | Discover related notes by similarity |
| get_session_context | Pinned notes + latest handoff + recent activity + git state + active claims |
| recent_activity | Recently modified notes |
| reindex | Re-scan and re-index the vault |
| index_stats | Index health and statistics |

Write

| Tool | What it does |
|---|---|
| save_note | Create or update a markdown note (auto-indexed, optional agent attribution) |
| save_decision | Log a structured project decision (optional agent attribution) |
| create_handoff | Write a session handoff for the next session (optional agent attribution) |

Your AI can now write to its own memory, not just read from it. Decisions persist. Handoffs survive. Every session builds on the last.
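
Over MCP, every tool is invoked with a standard JSON-RPC tools/call request. As a sketch, a save_note call could look like this on the wire; the argument names here are assumptions, so check the actual schema via tools/list:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "save_note",
    "arguments": {
      "path": "decisions/auth.md",
      "content": "## Decision: use JWT for session auth\n..."
    }
  }
}
```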


Works With

| Tool | Integration | Experience |
|---|---|---|
| Claude Code | Hooks + MCP | Full (automatic context surfacing + 12 tools) |
| Cursor | MCP | 12 tools for search, write, session management |
| Windsurf | MCP | 12 tools for search, write, session management |
| Obsidian | Vault detection | Indexes your existing vault |
| Logseq | Vault detection | Indexes your existing vault |
| Any MCP client | MCP server | 12 tools via stdio transport |

SAME works with any directory of .md files. No Obsidian required.

Use same init --mcp-only to skip Claude Code hooks and just register the MCP server.


SAME vs. Alternatives

| | SAME | mem0 | Letta | Basic Memory | doobidoo |
|---|---|---|---|---|---|
| Setup | 1 command | pip + config | Docker + PG | pip + config | pip + ChromaDB |
| Runtime deps | None | Python + LLM API | Docker + PG + LLM | Python | Python + ChromaDB |
| Offline capable | Full (Lite mode) | No | No | Partial | Yes |
| Cloud required | No | Default yes | Yes | No | No |
| Telemetry | None | Default ON | Unknown | None | None |
| MCP tools | 12 | 4-6 | 0 (REST) | 7+ | 24 |
| Hook integration | Yes (Claude Code) | No | No | No | No |
| Session continuity | Handoffs + pins + recovery | Session-scoped | Core feature | No | No |
| Published benchmarks | P=0.995, MRR=0.949 | Claims "26% better" | None | None | None |
| Binary size | ~10MB | ~100MB+ (Python) | ~500MB+ (Docker) | ~50MB+ | ~80MB+ |
| Language | Go | Python | Python | Python | Python |
| License | BSL 1.1 [1] | Apache 2.0 | Apache 2.0 | MIT | MIT |

[1] BSL 1.1: Free for personal, educational, hobby, research, and evaluation use. Converts to Apache 2.0 on 2030-02-02.


Privacy by Design

SAME creates a three-tier privacy structure:

| Directory | Indexed? | Committed? | Use for |
|---|---|---|---|
| Your notes | Yes | Your choice | Docs, decisions, research |
| _PRIVATE/ | No | No | API keys, credentials, secrets |
| research/ | Yes | No | Strategy, analysis — searchable but local-only |

Privacy is structural — filesystem-level, not policy-based. same init creates a .gitignore that enforces these boundaries automatically.
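
As a sketch, the enforced boundaries amount to .gitignore entries like these (the exact file same init generates may contain more):

```gitignore
# Never committed: secrets and local-only research
_PRIVATE/
research/
```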

Security hardening: Path traversal blocked across all tools. Dot-directory writes blocked. Symlink escapes prevented. Error messages sanitized — no internal paths leak to AI. Config files written with owner-only permissions (0o600). Ollama URL validated to localhost-only. Prompt injection patterns scanned before context injection. Push protection available for multi-agent workflows.


Install

```shell
# macOS / Linux
curl -fsSL statelessagent.com/install.sh | bash

# Or via npm (any platform — downloads prebuilt binary)
npm install -g @sgx-labs/same

# Windows (PowerShell)
irm statelessagent.com/install.ps1 | iex
```

If blocked by execution policy, run first: Set-ExecutionPolicy RemoteSigned -Scope CurrentUser

If you'd rather not pipe to bash, or you're having an AI assistant install for you:

macOS (Apple Silicon):

```shell
mkdir -p ~/.local/bin
curl -fsSL https://github.com/sgx-labs/statelessagent/releases/latest/download/same-darwin-arm64 -o ~/.local/bin/same
chmod +x ~/.local/bin/same
export PATH="$HOME/.local/bin:$PATH"  # add to ~/.zshrc to persist
same init --yes
```

macOS (Intel): Build from source (see below). Note that Rosetta translates x86_64 binaries for Apple Silicon, not the reverse, so the prebuilt arm64 binary will not run on Intel hardware.

Linux (x86_64):

```shell
mkdir -p ~/.local/bin
curl -fsSL https://github.com/sgx-labs/statelessagent/releases/latest/download/same-linux-amd64 -o ~/.local/bin/same
chmod +x ~/.local/bin/same
export PATH="$HOME/.local/bin:$PATH"
same init --yes
```

Build from source (any platform):

```shell
git clone --depth 1 https://github.com/sgx-labs/statelessagent.git
cd statelessagent && make install
same init --yes
```

Requires Go 1.25+ and CGO.


Commands

| Command | Description |
|---|---|
| same init | Set up SAME for your project (start here) |
| same demo | See SAME in action with sample notes |
| same tutorial | Learn SAME features hands-on (6 lessons) |
| same ask <question> | Ask a question, get cited answers from your notes |
| same search <query> | Search your notes |
| same search --all <query> | Search across all registered vaults |
| same related <path> | Find related notes |
| same status | See what SAME is tracking |
| same doctor | Run 18 diagnostic checks |
| same claim <path> --agent <name> | Create an advisory write claim for a file |
| same claim --read <path> --agent <name> | Declare a read dependency on a file |
| same claim --list | Show active read/write claims |
| same claim --release <path> [--agent <name>] | Release claims for a file |
| same pin <path> | Always include a note in every session |
| same pin list | Show pinned notes |
| same pin remove <path> | Unpin a note |
| same feedback <path> up\|down | Rate note helpfulness |
| same repair | Back up and rebuild the database |
| same reindex [--force] | Rebuild the search index |
| same display full\|compact\|quiet | Control output verbosity |
| same profile use precise\|balanced\|broad | Adjust precision vs. coverage |
| same model | Show current embedding model and alternatives |
| same model use <name> | Switch embedding model |
| same config show | Show configuration |
| same config edit | Open config in editor |
| same setup hooks | Install Claude Code hooks |
| same setup mcp | Register MCP server |
| same hooks | Show hook status and descriptions |
| same seed list | Browse available seed vaults |
| same seed install <name> | Download and install a seed vault |
| same seed info <name> | Show seed details |
| same seed remove <name> | Uninstall a seed vault |
| same vault list\|add\|remove\|default | Manage multiple vaults |
| same vault rename <old> <new> | Rename a vault alias |
| same vault feed <source> | Propagate notes from another vault (with PII guard) |
| same guard settings set push-protect on | Enable push protection |
| same push-allow | One-time push authorization |
| same watch | Auto-reindex on file changes |
| same budget | Context utilization report |
| same log | Recent SAME activity |
| same stats | Index statistics |
| same update | Update to latest version |
| same version [--check] | Version and update check |

Configuration

SAME uses .same/config.toml, generated by same init:

```toml
[vault]
path = "/home/user/notes"
# skip_dirs = [".venv", "build"]
# noise_paths = ["experiments/", "raw_outputs/"]
handoff_dir = "sessions"
decision_log = "decisions.md"

[ollama]
url = "http://localhost:11434"

[embedding]
provider = "ollama"           # "ollama" (default), "openai", or "openai-compatible"
model = "nomic-embed-text"    # see supported models below
# api_key = ""                # required for openai, or set SAME_EMBED_API_KEY

[memory]
max_token_budget = 800
max_results = 2
distance_threshold = 16.2
composite_threshold = 0.65

[hooks]
context_surfacing = true
decision_extractor = true
handoff_generator = true
feedback_loop = true
staleness_check = true
```

Supported embedding models (auto-detected dimensions):

| Model | Dims | Notes |
|---|---|---|
| nomic-embed-text | 768 | Default. Great balance of quality and speed |
| snowflake-arctic-embed2 | 768 | Recommended upgrade. Best retrieval in its size class |
| mxbai-embed-large | 1024 | Highest overall MTEB average |
| all-minilm | 384 | Lightweight (~90MB). Good for constrained hardware |
| snowflake-arctic-embed | 1024 | v1 large model |
| embeddinggemma | 768 | Google's Gemma-based embeddings |
| qwen3-embedding | 1024 | Qwen3 with 32K context |
| nomic-embed-text-v2-moe | 768 | MoE upgrade from nomic |
| bge-m3 | 1024 | Multilingual (BAAI) |
| text-embedding-3-small | 1536 | OpenAI cloud API |

Any model not listed works too — set dimensions explicitly with SAME_EMBED_DIMS.

Configuration priority (highest wins):

  1. CLI flags (--vault)
  2. Environment variables (VAULT_PATH, OLLAMA_URL, SAME_*)
  3. Config file (.same/config.toml)
  4. Built-in defaults
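
The chain behaves like a first-non-empty lookup. A minimal Python sketch of the resolution order (SAME itself is written in Go; this only models the precedence, and the function name is invented for illustration):

```python
import os

def resolve(flag_value, env_var, config, key, default):
    """Resolve one setting: CLI flag > environment > config file > default."""
    if flag_value is not None:          # 1. CLI flag (e.g. --vault)
        return flag_value
    env = os.environ.get(env_var)       # 2. environment (e.g. VAULT_PATH)
    if env:
        return env
    if key in config:                   # 3. .same/config.toml
        return config[key]
    return default                      # 4. built-in default

cfg = {"path": "/home/user/notes"}
assert resolve("/tmp/v", "VAULT_PATH", cfg, "path", ".") == "/tmp/v"
assert resolve(None, "SOME_UNSET_VAR_XYZ", cfg, "path", ".") == "/home/user/notes"
```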

| Variable | Default | Description |
|---|---|---|
| VAULT_PATH | auto-detect | Path to your markdown notes |
| OLLAMA_URL | http://localhost:11434 | Ollama API (must be localhost) |
| SAME_DATA_DIR | <vault>/.same/data | Database location |
| SAME_HANDOFF_DIR | sessions | Handoff notes directory |
| SAME_DECISION_LOG | decisions.md | Decision log path |
| SAME_EMBED_PROVIDER | ollama | Embedding provider (ollama, openai, or openai-compatible) |
| SAME_EMBED_MODEL | nomic-embed-text | Embedding model name |
| SAME_EMBED_BASE_URL | (provider default) | Base URL for embedding API (e.g. http://localhost:8080 for local servers) |
| SAME_EMBED_API_KEY | (none) | API key (required for openai, optional for openai-compatible) |
| SAME_SKIP_DIRS | (none) | Extra dirs to skip (comma-separated) |
| SAME_NOISE_PATHS | (none) | Paths filtered from context surfacing (comma-separated) |

Display Modes

Control how much SAME shows when surfacing context:

| Mode | Command | Description |
|---|---|---|
| full | same display full | Box with note titles, match terms, token counts (default) |
| compact | same display compact | One-line summary: "surfaced 2 of 847 memories" |
| quiet | same display quiet | Silent — context injected with no visual output |

Display mode is saved to .same/config.toml and takes effect on the next prompt.

Push Protection

Prevent accidental git pushes when running multiple AI agents on the same machine.

```shell
# Enable push protection
same guard settings set push-protect on

# Before pushing, explicitly allow it
same push-allow

# Check guard status
same guard status
```

When enabled, a pre-push git hook blocks pushes unless a one-time ticket has been created via same push-allow. Tickets expire after 30 seconds by default (configurable via same guard settings set push-timeout N).
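
Conceptually the ticket is just a timestamp check. A Python sketch of the idea (not SAME's actual hook code):

```python
import time

def ticket_valid(created_at, now=None, timeout_s=30):
    """A push ticket is good for one window after `same push-allow`."""
    now = time.time() if now is None else now
    return (now - created_at) <= timeout_s

t0 = 1_000_000.0
assert ticket_valid(t0, now=t0 + 10)       # within the 30s window: push allowed
assert not ticket_valid(t0, now=t0 + 31)   # expired: pre-push hook blocks
```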

Troubleshooting

Start with same doctor — it runs 18 checks and tells you exactly what's wrong.

"No vault found" SAME can't find your notes directory. Fix:

  • Run same init from inside your notes folder
  • Or set VAULT_PATH=/path/to/notes in your environment
  • Or use same vault add myproject /path/to/notes

"Ollama not responding" The embedding provider is unreachable. Fix:

  • Check if Ollama is running (look for the llama icon)
  • Test with: curl http://localhost:11434/api/tags
  • If using a non-default port, set OLLAMA_URL=http://localhost:<port>
  • SAME will automatically fall back to keyword search if Ollama is temporarily down

Hooks not firing Context isn't being surfaced during Claude Code sessions. Fix:

  • Run same setup hooks to reinstall hooks
  • Verify with same status (hooks should show as "active")
  • Check .claude/settings.json exists in your project

Context not surfacing Hooks fire but no notes appear. Fix:

  • Run same doctor to diagnose all 18 checks
  • Run same reindex if your notes have changed
  • Try same search "your query" to test search directly
  • Check if display mode is set to "quiet": same config show

"Cannot open SAME database" The SQLite database is missing or corrupted. Fix:

  • Run same repair to back up and rebuild automatically
  • Or run same init to set up from scratch
  • Or run same reindex --force to rebuild the index

Benchmarks

SAME's retrieval is tuned against 105 ground-truth test cases — real queries paired with known-relevant notes.

| Metric | Value | Meaning |
|---|---|---|
| Precision | 99.5% | Surfaced notes are almost always relevant |
| Coverage | 90.5% | Finds ~9/10 relevant notes |
| MRR | 0.949 | Most relevant note is usually first |
| BAD cases | 0 | Zero irrelevant top results |

Tuning constants: maxDistance=16.3, minComposite=0.70, gapCap=0.65. Shared between hooks and MCP via ranking.go.

All evaluation uses synthetic vault data with known relevance judgments. No user data is used.
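
MRR (mean reciprocal rank) averages 1 / rank of the first relevant result across all queries, so 0.949 means the right note is almost always in first place. A short Python sketch of how such a score is computed:

```python
def mean_reciprocal_rank(first_relevant_ranks):
    """first_relevant_ranks holds the 1-based rank of the first relevant
    note for each query, or None if nothing relevant was retrieved."""
    scores = [0.0 if r is None else 1.0 / r for r in first_relevant_ranks]
    return sum(scores) / len(scores)

# Three queries where the relevant note ranked 1st, 1st, and 2nd:
assert abs(mean_reciprocal_rank([1, 1, 2]) - (2.5 / 3)) < 1e-9
```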


FAQ

Do I need Obsidian? No. Any directory of .md files works.

Do I need Ollama? Recommended, not required. Semantic search understands meaning; without Ollama, SAME falls back to keyword search (FTS5). You can also use OpenAI embeddings (SAME_EMBED_PROVIDER=openai) or any OpenAI-compatible server such as llama.cpp, vLLM, or LM Studio (SAME_EMBED_PROVIDER=openai-compatible). If your embedding server goes down temporarily, SAME falls back to keywords automatically.

Does it slow down my prompts? 50-200ms. Embedding is the bottleneck — search and scoring take <5ms.

Is my data sent anywhere? SAME is fully local. Context surfaced to your AI tool is sent to that tool's API as part of your conversation, same as pasting it manually.

How much disk space? 5-15MB for a few hundred notes.

What are seeds? Pre-built knowledge vaults. Install one and your AI has expert-level context immediately. same seed list to browse, same seed install <name> to install. All local, all free.

Can I use multiple vaults? Yes. same vault add work ~/work-notes && same vault default work. Search across all of them with same search --all "your query" or via the search_across_vaults MCP tool.


Community

Discord · GitHub Discussions · Report a Bug

Support

Buy me a coffee · GitHub Sponsors

Built with

Go · SQLite + sqlite-vec · Ollama / OpenAI

License

Source available under BSL 1.1. Free for personal, educational, hobby, research, and evaluation use. Converts to Apache 2.0 on 2030-02-02. See LICENSE.

