OpenExp

Skills tell your AI how. OpenExp teaches it what works.
Outcome-based learning for AI agents. Q-learning memory that gets smarter with every session.


Quick Start · How It Works · MCP Tools · Configuration · Architecture · Contributing


You wrote a skill: "how to work with CRM." Your agent follows it perfectly. But it doesn't know that approach A closed deals and approach B didn't. Tomorrow it'll do the same thing as yesterday — even if yesterday didn't work.

Skills say how. OpenExp teaches what works.

Every outcome — commit, closed deal, resolved ticket — feeds back as a reward signal. Memories that led to results get higher Q-values and surface first next time. Noise sinks.

Example: sales agent

Your agent sent 200 emails this month. Which formulations got replies? Which approaches closed deals? Skills don't know — there's no feedback loop.

# .openexp.yaml in your sales project
experience: sales

1. Define your pipeline: lead → contacted → qualified → proposal → won
2. Work normally — Claude remembers client preferences, deal context, pricing
3. Deal closes → all memories tagged with that client get rewarded
4. Next similar deal → the insights that led to the close surface first

After a month, your agent "knows" not just how to write emails — but which emails lead to results.

The Problem

Skills and CLAUDE.md solve the "agent doesn't remember" problem. But they're static instructions — written once, never learning from outcomes. Your agent follows the playbook perfectly, but doesn't know which plays actually work.

Existing memory tools (Mem0, Zep, LangMem) add storage — but every memory is equally important. A two-month-old note about a deleted feature has the same weight as yesterday's critical architecture decision.

The missing piece: there's no learning. No feedback loop from outcomes to retrieval quality.

The Solution

OpenExp adds a closed-loop learning system:

Session starts → recall memories (ranked by Q-value)
    ↓
Agent works → observations captured automatically
    ↓
Session ends → productive? (commits, PRs, closed deals, resolved tickets)
    ↓
    YES → reward recalled memories (Q-values go up)
    NO  → penalize them (Q-values go down)
    ↓
Next session → better memories surface first

Outcome-Based Rewards

Beyond session-level heuristics, OpenExp supports outcome-based rewards from real business events. When a CRM deal moves from "negotiation" to "won", the memories tagged with that client get rewarded — even if the deal took weeks to close.

add_memory(content="Acme prefers Google stack", client_id="comp-acme")
    ↓
... weeks of work ...
    ↓
CRM: Acme deal moves negotiation → won
    ↓
resolve_outcomes → finds memories tagged comp-acme → reward +0.8

After a few sessions, OpenExp learns what context actually helps you get work done.
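
To make the outcome loop concrete, here is a minimal sketch of what a custom outcome resolver could look like. The class name, method signature, and RewardEvent fields are illustrative assumptions, not the actual OpenExp API — the shipped CRMCSVResolver in resolvers/crm_csv.py is the reference implementation.

# Hypothetical resolver sketch: map CRM stage transitions to reward events
# for memories tagged with the same client_id. All names are assumptions.
from dataclasses import dataclass

@dataclass
class RewardEvent:
    client_id: str   # entity tag shared with add_memory(..., client_id=...)
    reward: float    # in [-1.0, 1.0], consumed by the Q-value updater
    reason: str

class ExampleCRMResolver:
    # Would be registered via OPENEXP_OUTCOME_RESOLVERS=my_module:ExampleCRMResolver (assumed)
    STAGE_REWARDS = {
        ("negotiation", "won"): 0.8,    # deal closed → strong positive signal
        ("negotiation", "lost"): -0.5,  # deal lost → negative signal
    }

    def resolve(self, stage_changes):
        # stage_changes: iterable of (client_id, old_stage, new_stage) tuples
        events = []
        for client_id, old, new in stage_changes:
            reward = self.STAGE_REWARDS.get((old, new))
            if reward is not None:
                events.append(RewardEvent(client_id, reward, f"{old} → {new}"))
        return events

resolve_outcomes (or openexp resolve) then applies each event's reward to every memory carrying that client_id tag.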

Why OpenExp?

| Feature | OpenExp | Mem0 | Zep/Graphiti | LangMem |
|---|---|---|---|---|
| Learns from outcomes | Yes — Q-learning from real business results | No | No | No |
| Process-aware | Define pipeline stages with reward signals | No | No | No |
| Memory type filtering | Reward only decisions/insights, not noise | No | No | No |
| Outcome-based rewards | CRM deal closes → tagged memories get rewarded | No | No | No |
| Claude Code native | Zero-config hooks, works out of the box | Requires integration | Requires integration | Requires integration |
| Local-first | Qdrant + FastEmbed, no cloud, no API key for core | Cloud API | Cloud or self-hosted | Cloud API |
| Hybrid retrieval | BM25 + vector + recency + importance + Q-value (5 signals) | Vector only | Graph + vector | Vector only |
| Privacy | All data stays on your machine | Data sent to cloud | Depends on setup | Data sent to cloud |

The key difference: skills say how. Memory tools store. OpenExp learns what works — from real outcomes.

Quick Start

git clone https://github.com/anthroos/openexp.git
cd openexp
./setup.sh

That's it. Open Claude Code in any project — it now has memory.

[!TIP] No API key needed for core functionality. Embeddings run locally via FastEmbed. An Anthropic API key is optional — it enables auto-enrichment (type classification, tags, validity windows) but everything works great without it.

Prerequisites: Python 3.11+, Docker, jq

What You'll See

When you open Claude Code after a few sessions:

# OpenExp Memory (Q-value ranked)
Query: my-project | Sunday 2026-03-22

## Relevant Context
[sim=0.82 q=0.73] Fixed auth bug by adding token refresh logic in api/auth.py
[sim=0.76 q=0.65] Project uses FastAPI + PostgreSQL, deployed on Railway
[sim=0.71 q=0.58] User prefers pytest with fixtures, not unittest

q=0.73 means this memory consistently leads to productive sessions. q=0.31 means it's been recalled but didn't help — it'll rank lower next time.

How It Works

Four hooks integrate with Claude Code automatically:

| Hook | When | What |
|---|---|---|
| SessionStart | Session opens | Searches Qdrant for relevant memories, injects top results as context |
| UserPromptSubmit | Every message | Lightweight recall — adds relevant memories to each prompt |
| PostToolUse | After Write/Edit/Bash | Captures what Claude does as observations (JSONL) |
| SessionEnd | Session closes | Generates summary, triggers ingest + reward (async) |
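
setup.sh registers these hooks for you. For orientation, a registered entry in ~/.claude/settings.local.json has roughly the following shape — the command paths below are placeholders, and whatever setup.sh actually writes is the source of truth:

{
  "hooks": {
    "SessionStart": [
      { "hooks": [{ "type": "command", "command": "/path/to/openexp/hooks/session-start.sh" }] }
    ],
    "PostToolUse": [
      { "matcher": "Write|Edit|Bash", "hooks": [{ "type": "command", "command": "/path/to/openexp/hooks/post-tool-use.sh" }] }
    ]
  }
}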

The MCP server provides 16 tools for memory operations, introspection, and calibration.

The Learning Loop

┌──────────────────────────────────────────────────────────────┐
│                                                              │
│   ┌─────────┐    search     ┌────────┐    inject    ┌─────┐ │
│   │ Qdrant  │──────────────→│ Scorer │────────────→│ LLM │ │
│   │ (384d)  │               │        │              │     │ │
│   └────┬────┘               └────────┘              └──┬──┘ │
│        │                    BM25 10%                    │    │
│        │                    Vector 30%                  │    │
│   Q-values                  Recency 15%            observations
│   updated                   Importance 15%             │    │
│        │                    Q-value 30%                 │    │
│        │                                               │    │
│   ┌────┴────┐   reward    ┌──────────┐   ingest   ┌───┴──┐ │
│   │ Q-Cache │←────────────│ Reward   │←───────────│ JSONL│ │
│   │  (LRU)  │             │ Tracker  │            │ obs  │ │
│   └─────────┘             └──────────┘            └──────┘ │
│                                                              │
└──────────────────────────────────────────────────────────────┘

Q-Learning Details

Every memory has a Q-value, which starts at 0.0 and must earn value from zero. Three layers capture different aspects:

| Layer | Weight | Measures |
|---|---|---|
| action | 50% | Did recalling this help get work done? |
| hypothesis | 20% | Was the information accurate? |
| fit | 30% | Was it relevant to the context? |

Update rule:

Q_new = clamp(Q_old + α × reward, floor, ceiling)

α = 0.25 (learning rate)
reward ∈ [-1.0, 1.0] (productivity signal)
floor = -0.5, ceiling = 1.0
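
As a minimal sketch of that update rule (variable names are illustrative; the real logic lives in core/q_value.py):

def update_q(q_old: float, reward: float, alpha: float = 0.25,
             floor: float = -0.5, ceiling: float = 1.0) -> float:
    # Nudge the memory's Q-value toward the reward signal, clamped to [floor, ceiling].
    return max(floor, min(ceiling, q_old + alpha * reward))

# Example: a memory recalled in a productive session (reward = +0.8)
# moves from 0.0 to 0.2; repeated wins keep pushing it toward the ceiling.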

Retrieval scoring combines five signals:

score = 0.30 × vector_similarity    # semantic match
      + 0.10 × bm25_score           # keyword match
      + 0.15 × recency              # exponential decay (90-day half-life)
      + 0.15 × importance           # type-weighted metadata
      + 0.30 × q_value              # learned quality

Retrieval also uses 10% epsilon-greedy exploration — it occasionally surfaces low-Q memories to give them another chance.
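
A simplified sketch of how those signals might combine, with epsilon-greedy exploration on top (the signal names and the promotion mechanics are assumptions; see core/hybrid_search.py and core/scoring.py for the actual implementation):

import random

WEIGHTS = {"vector": 0.30, "bm25": 0.10, "recency": 0.15, "importance": 0.15, "q_value": 0.30}

def composite_score(signals: dict) -> float:
    # Weighted sum of the five retrieval signals, each normalized to [0, 1].
    return sum(weight * signals.get(name, 0.0) for name, weight in WEIGHTS.items())

def rank(candidates: list[dict], epsilon: float = 0.10) -> list[dict]:
    # Rank by composite score; with probability epsilon, promote one random
    # lower-ranked candidate so low-Q memories occasionally get another chance.
    ranked = sorted(candidates, key=composite_score, reverse=True)
    if len(ranked) > 1 and random.random() < epsilon:
        ranked.insert(0, ranked.pop(random.randrange(1, len(ranked))))
    return ranked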

MCP Tools

Core — memory operations:

| Tool | Description |
|---|---|
| search_memory | Hybrid search: BM25 + vector + Q-value reranking |
| add_memory | Store memory with auto-enrichment (type, tags, validity). Supports client_id for entity tagging |
| log_prediction | Track a prediction for later outcome resolution |
| log_outcome | Resolve prediction with reward → updates Q-values |
| get_agent_context | Full context: memories + pending predictions |
| resolve_outcomes | Run outcome resolvers (CRM stage changes → targeted rewards) |
| reflect | Review recent memories for patterns |
| memory_stats | Q-cache size, prediction accuracy stats |
| reload_q_cache | Hot-reload Q-values from disk |
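
Over the MCP STDIO transport, a search_memory call is an ordinary JSON-RPC 2.0 tools/call request. The argument names below (query, limit) are illustrative assumptions:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_memory",
    "arguments": { "query": "authentication flow", "limit": 5 }
  }
}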

Introspection — understand why memories rank the way they do:

| Tool | Description |
|---|---|
| experience_info | Active experience config (weights, resolvers, boosts) |
| experience_top_memories | Top or bottom N memories by Q-value |
| experience_insights | Reward distribution, learning velocity, valuable memory types |
| calibrate_experience_q | Manually set Q-value for a memory with reason |
| memory_reward_history | Full reward trail: Q-value changes, contexts (L2), cold storage (L3) |
| reward_detail | Complete L3 cold storage record for a reward event |
| explain_q | Human-readable LLM explanation of why a memory has its Q-value (L4) |

CLI

# Search memories
openexp search -q "authentication flow" -n 5

# Ingest observations into Qdrant
openexp ingest

# Preview what would be ingested (dry run)
openexp ingest --dry-run

# Run outcome resolvers (CRM stage changes → rewards)
openexp resolve

# Show Q-cache statistics
openexp stats

# Memory compaction (merge similar memories)
openexp compact --dry-run

# Manage experiences
openexp experience list
openexp experience show sales
openexp experience create        # interactive wizard

# Visualization
openexp viz --replay latest      # session replay
openexp viz --demo               # demo dashboard

Configuration

All settings via environment variables (.env):

| Variable | Default | Description |
|---|---|---|
| QDRANT_HOST | localhost | Qdrant server host |
| QDRANT_PORT | 6333 | Qdrant server port |
| QDRANT_API_KEY | (none) | Optional: Qdrant auth (also passed to Docker) |
| OPENEXP_COLLECTION | openexp_memories | Qdrant collection name |
| OPENEXP_DATA_DIR | ~/.openexp/data | Q-cache, predictions, retrieval logs |
| OPENEXP_OBSERVATIONS_DIR | ~/.openexp/observations | Where hooks write observations |
| OPENEXP_SESSIONS_DIR | ~/.openexp/sessions | Session summary files |
| OPENEXP_EMBEDDING_MODEL | BAAI/bge-small-en-v1.5 | Embedding model (local, free) |
| OPENEXP_EMBEDDING_DIM | 384 | Embedding dimensions |
| OPENEXP_INGEST_BATCH_SIZE | 50 | Batch size for ingestion |
| OPENEXP_OUTCOME_RESOLVERS | (none) | Outcome resolvers (format: module:Class) |
| OPENEXP_CRM_DIR | (none) | CRM directory for CRMCSVResolver |
| ANTHROPIC_API_KEY | (none) | Optional: enables LLM-based enrichment |
| OPENEXP_ENRICHMENT_MODEL | claude-haiku-4-5-20251001 | Model for auto-enrichment |

Anthropic API key is optional. Without it, memories get default metadata. With it, each memory is automatically classified (type, importance, tags, validity window).
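
For a fully local setup, a .env can simply restate the defaults. Everything below is taken from the table above; the commented lines are optional:

# .env — local-first setup, no cloud services required
QDRANT_HOST=localhost
QDRANT_PORT=6333
OPENEXP_COLLECTION=openexp_memories
OPENEXP_DATA_DIR=~/.openexp/data
OPENEXP_EMBEDDING_MODEL=BAAI/bge-small-en-v1.5
OPENEXP_EMBEDDING_DIM=384

# Optional: enable LLM-based enrichment
# ANTHROPIC_API_KEY=your-key-here
# OPENEXP_ENRICHMENT_MODEL=claude-haiku-4-5-20251001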

Architecture

openexp/
├── core/                       # Q-learning memory engine
│   ├── q_value.py              # Q-learning: QCache, QValueUpdater, QValueScorer
│   ├── direct_search.py        # FastEmbed (384d) + Qdrant vector search
│   ├── hybrid_search.py        # BM25 keyword + vector + Q-value hybrid scoring
│   ├── scoring.py              # Composite relevance: similarity × recency × importance
│   ├── lifecycle.py            # 8-state memory lifecycle (active→confirmed→archived→...)
│   ├── experience.py           # Per-domain Q-value contexts (default, sales, dealflow)
│   ├── enrichment.py           # Auto-metadata extraction (LLM or defaults)
│   ├── explanation.py          # L4: LLM-generated reward explanations
│   ├── reward_log.py           # L3: cold storage of reward events
│   ├── compaction.py           # Memory merging/clustering
│   ├── v7_extensions.py        # Lifecycle filter + hybrid scoring integration
│   └── config.py               # Environment-based configuration
│
├── ingest/                     # Observation → Qdrant pipeline
│   ├── observation.py          # JSONL observations → embeddings → Qdrant
│   ├── session_summary.py      # Session .md files → memory objects
│   ├── reward.py               # Session productivity → reward signal
│   ├── retrieval_log.py        # Closed-loop: which memories were recalled
│   ├── watermark.py            # Idempotent ingestion tracking
│   └── filters.py              # Filter trivial observations
│
├── resolvers/                  # Outcome resolvers (pluggable)
│   └── crm_csv.py              # CRM CSV stage transition → reward events
│
├── data/experiences/           # Shipped experience configs
│   ├── default.yaml            # Software engineering
│   ├── sales.yaml              # Sales & outreach
│   └── dealflow.yaml           # Deal pipeline
│
├── outcome.py                  # Outcome resolution framework
│
├── hooks/                      # Claude Code integration
│   ├── session-start.sh        # Inject Q-ranked memories at startup
│   ├── user-prompt-recall.sh   # Per-message context recall
│   ├── post-tool-use.sh        # Capture observations from tool calls
│   └── session-end.sh          # Summary + ingest + reward (closes the loop)
│
├── mcp_server.py               # MCP STDIO server (16 tools, JSON-RPC 2.0)
├── reward_tracker.py           # Prediction → outcome → Q-value updates
├── viz.py                      # Visualization + session replay
└── cli.py                      # CLI: search, ingest, stats, viz, compact, experience

Memory Lifecycle

Memories move through 8 states to prevent stale context:

active ──→ confirmed ──→ outdated ──→ archived ──→ deleted
  │            │                          ↑
  ├──→ contradicted ──────────────────────┘
  ├──→ merged
  └──→ superseded

Only active and confirmed memories are returned in searches. Status weights affect scoring: confirmed=1.2×, active=1.0×, outdated=0.5×, archived=0.3×.
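
In scoring terms the lifecycle is just a filter plus a multiplier; a simplified sketch (names are assumed — see core/lifecycle.py and core/v7_extensions.py for the real implementation):

SEARCHABLE = {"active", "confirmed"}
STATUS_WEIGHT = {"confirmed": 1.2, "active": 1.0, "outdated": 0.5, "archived": 0.3}

def lifecycle_adjusted(memories: list[dict]) -> list[dict]:
    # Drop memories outside the searchable states, then scale each score
    # by its lifecycle weight before final ranking.
    return [
        {**m, "score": m["score"] * STATUS_WEIGHT.get(m["status"], 1.0)}
        for m in memories
        if m["status"] in SEARCHABLE
    ]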

Data Flow

PostToolUse hook                                  SessionStart hook
      │                                                 ↑
      ↓                                                 │
~/.openexp/observations/*.jsonl                Qdrant search (top 10)
      │                                          + Q-value reranking
      ↓                                                 ↑
SessionEnd hook ──→ summary .md                         │
      │                                                 │
      ↓ (async)                                         │
openexp ingest ──→ FastEmbed ──→ Qdrant ─────────────────┘
      │                            ↑
      ↓                            │
Q-Cache (q_cache.json) ←── reward signal ←── session productivity

Technical Details

| Component | Choice | Why |
|---|---|---|
| Embeddings | FastEmbed (BAAI/bge-small-en-v1.5) | Local, free, no API key, 384 dimensions |
| Vector DB | Qdrant | Fast ANN search, payload filtering, Docker-ready |
| Q-Cache | In-memory LRU (100K entries) | Fast lookup, delta-based persistence for concurrent sessions |
| Transport | MCP STDIO (JSON-RPC 2.0) | Native Claude Code integration |
| Hooks | Bash scripts | Minimal dependencies, shell-level integration |

Troubleshooting

Docker / Qdrant won't start:

# Check Docker is running
docker info

# Check Qdrant container
docker ps -a | grep openexp-qdrant
docker logs openexp-qdrant

Hooks not firing:

# Verify hooks are registered
cat ~/.claude/settings.local.json | jq '.hooks'

# Re-run setup to fix registration
./setup.sh

No memories appearing: Memories need to be ingested first. After a few Claude Code sessions:

openexp ingest --dry-run   # preview what will be ingested
openexp ingest             # ingest into Qdrant
openexp stats              # check Q-cache state

Experiences — Define Your Process

Not everyone writes code. An Experience defines what "productive" means for your workflow, including pipeline stages and which memory types matter.

| Experience | Process | Top Signals |
|---|---|---|
| default | backlog → in_progress → review → merged → deployed | commits, PRs, tests |
| sales | lead → contacted → qualified → proposal → negotiation → won | decisions, emails, follow-ups |
| dealflow | lead → discovery → nda → proposal → negotiation → invoice → paid | proposals, invoices, payments |

Switch with one env var:

export OPENEXP_EXPERIENCE=dealflow

Each experience also controls which memory types get rewarded — sales rewards decisions and insights, not raw tool actions. This means the system learns faster because it focuses on the signal, not the noise.
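
A custom experience config might look roughly like this; the field names are illustrative assumptions, and the shipped configs in data/experiences/ are the authoritative reference:

# data/experiences/support.yaml — hypothetical example
name: support
stages: [new, triaged, in_progress, waiting_on_customer, resolved]
reward_signals:
  resolved_ticket: 0.8
  reopened_ticket: -0.4
rewarded_memory_types: [decision, insight, customer_preference]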

Create your own with the interactive wizard:

openexp experience create
# Pick a process type (dev/sales/support/content)
# Customize stages, signal weights, memory type filters

See the Experiences Guide for full details.

Documentation

Detailed docs are available in the docs/ directory.

Contributing

This project is in early stages. See CONTRIBUTING.md for setup and workflow.

Key areas where help is welcome:

  • New experiences — domain-specific reward profiles (DevOps, writing, research, etc.)
  • Outcome resolvers — new integrations beyond CRM (Jira, Linear, GitHub Issues)
  • Multi-project learning — sharing relevant context across projects
  • Benchmarks — measuring retrieval quality improvement over time
  • Automated lifecycle transitions — contradiction detection, staleness heuristics

Research

OpenExp implements value-driven memory retrieval inspired by MemRL, adapted for episodic memory in AI coding assistants.

Core insight: treating memory retrieval as a reinforcement learning problem — where the reward signal comes from real session outcomes — produces better context selection than similarity-only search.

Citation

If you use OpenExp in your research, please cite:

@article{pasichnyk2026yerkes,
  title={The Yerkes-Dodson Curve for AI Agents: Optimal Pressure in Multi-Agent Survival Games},
  author={Pasichnyk, Ivan},
  journal={arXiv preprint arXiv:2603.07360},
  year={2026},
  url={https://arxiv.org/abs/2603.07360}
}

License

MIT © Ivan Pasichnyk
