MCP Memory Gateway (rlhf-feedback-loop)

Local-first RLHF feedback loop for AI agents — capture preference signals, promote memories, block repeated mistakes, export DPO/KTO training pairs

Local-first AI reliability system for coding agents. Keeps one sharp agent on task: persist decisions, surface reliability rules, and inject relevant history without adding orchestration or subagent handoff overhead.

Honest disclaimer: This is a context injection system, not RLHF. LLM weights are not updated by thumbs-up/down signals. What actually happens: feedback is validated, promoted to searchable memory, and recalled at session start so agents have project history they'd otherwise lose. That's genuinely valuable — but it's context engineering, not reinforcement learning.

Works with any MCP-compatible agent: Claude, Codex, Gemini, Amp, Cursor, OpenCode.

Verification evidence for shipped features lives in docs/VERIFICATION_EVIDENCE.md.

Repo-local operator guides:

Continuity tools help you resume work. MCP Memory Gateway keeps the resumed session sharper: recall, reliability rules, pre-action gates, and verification layered on top of that continuity workflow without another planner or swarm.

Claude Workflow Hardening

If you are selling or deploying Claude-first delivery, the cleanest commercial wedge is not "AI employee" hype. It is a Workflow Hardening Sprint for one workflow with enough memory, gates, and proof to ship safely.

Use that motion when a buyer already has:

  • one workflow owner
  • one repeated failure pattern or rollout blocker
  • one buyer who needs proof before broader rollout

That maps cleanly to three offers:

  • Workflow Hardening Sprint for one production workflow with business value
  • code modernization guardrails for long-running migration and refactor sessions
  • hosted Pro at $49 one-time when the team only needs synced memory, gates, and usage analytics

Use these assets in sales and partner conversations:

Claude Desktop Extensions

This repo already ships a Claude Desktop extension lane:

  • Claude metadata: .claude-plugin/plugin.json
  • Claude marketplace metadata: .claude-plugin/marketplace.json
  • Claude extension install and support guide: .claude-plugin/README.md
  • Claude Desktop bundle builder: npm run build:claude-mcpb
  • Claude Desktop bundle launcher: .claude-plugin/bundle/server/index.js
  • Claude Desktop bundle icon: .claude-plugin/bundle/icon.png
  • Internal submission packet: docs/CLAUDE_DESKTOP_EXTENSION.md

Install locally today with:

claude mcp add rlhf -- npx -y mcp-memory-gateway serve

Build a submission-ready .mcpb locally with:

npm run build:claude-mcpb

Treat Anthropic directory inclusion as a discoverability and trust lane, not as revenue proof or partner proof.

Cursor Marketplace

This repo now ships a submission-ready Cursor plugin bundle:

  • Root marketplace manifest: .cursor-plugin/marketplace.json
  • Plugin directory: plugins/cursor-marketplace/
  • Plugin MCP config: plugins/cursor-marketplace/.mcp.json

That package keeps the Cursor review surface intentionally small: one MCP server bundle for the Agentic Feedback Studio reliability system, Veto Layer guardrails, DPO export, and Thompson Sampling feedback loop. Until the public listing is approved, Cursor users can still install locally with npx mcp-memory-gateway init.
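For orientation, a minimal MCP server entry for that bundle would follow the standard `mcpServers` config shape. This sketch is illustrative only; the shipped plugins/cursor-marketplace/.mcp.json is the source of truth:

```json
{
  "mcpServers": {
    "rlhf": {
      "command": "npx",
      "args": ["-y", "mcp-memory-gateway", "serve"]
    }
  }
}
```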

Visual Demo: Experience the Magic

Stop imagining and see the MCP Memory Gateway in action. This is the difference between an agent that repeats mistakes and one that actually improves.

1. The "Repeat Mistake" Cycle (Without Gateway)

Agent: I'll fix the bug and push directly to main.
User: No, you forgot to check the PR review thread again!
Agent: Sorry, I'll remember next time. (It won't).

2. The "Agentic Memory" Cycle (With Gateway)

Watch how the Pre-Action Gates and Reasoning Traces physically block the failure:

User: Fix the bug and push.
Agent: I'll apply the fix... [Applying Edit]
Agent: Now I'll push to main... [Executing: git push]

🛑 GATE BLOCKED: push-without-thread-check
──────────────────────────────────────────────────
Reason    : Rule promoted from 3+ previous failures.
Condition : No 'gh pr view' or thread check detected in current session.
Action    : Blocked. Please check review threads first.
──────────────────────────────────────────────────

Agent: My apologies. I see that I am blocked because I haven't checked 
the PR threads. I'll do that now... [Executing: gh pr view]

Success! Agent finds a blocker in the thread, fixes it, and then pushes.

3. Deep Troubleshooting with Reasoning Traces

Every captured signal now includes a Reasoning Trace, making "black-box" failures transparent:

# Capture feedback with the new --reasoning flag
npx mcp-memory-gateway capture --feedback=down \
  --context="Agent skipped unit tests" \
  --reasoning="The agent assumed the change was too small to break anything, but it regressed the auth flow." \
  --tags="testing,regression"

Now, when the agent starts its next session, it doesn't just see "Don't skip tests." It sees the logic that led to the failure, preventing the same cognitive trap.

  1. Capture — capture_feedback MCP tool accepts signals with structured context (vague "thumbs down" is rejected)
  2. Validate — Rubric engine gates promotion — requires specific failure descriptions, not vibes
  3. Remember — Promoted memories stored in JSONL + LanceDB vectors for semantic search
  4. Prevent — Repeated failures auto-generate prevention rules (the actual value — agents follow these when loaded)
  5. Gate — Pre-action blocking via PreToolUse hooks — physically prevents known mistakes before they happen
  6. Recall — recall tool injects relevant past context into current session (this is the mechanism that works)
  7. Export — DPO/KTO pairs for optional downstream fine-tuning (separate from runtime behavior)
  8. Bridge — JSONL file watcher auto-ingests signals from external sources (Amp plugins, hooks, scripts)
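As a concrete example of the structured context steps 1-2 require, a signal that would pass the rubric gate carries specific failure details in the same shape as the watcher entries shown later in this README (the values here are illustrative):

```json
{
  "signal": "down",
  "context": "Agent pushed to main without checking PR review threads",
  "whatWentWrong": "Skipped gh pr view before git push",
  "whatToChange": "Check unresolved review threads before any push",
  "tags": ["pr-review", "git"]
}
```

A bare `{"signal":"down","context":"bad"}` is the kind of vibes-only entry the rubric engine rejects.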

What Works vs. What Doesn't

| ✅ Actually works | ❌ Does not work |
| --- | --- |
| recall injects past context — agent reads and uses it | Thumbs up/down changing agent behavior mid-session |
| remember persists decisions across sessions | LLM weight updates from feedback signals |
| Prevention rules — followed when loaded at session start | Feedback stats improving agent performance automatically |
| Pre-action gates — physically block known mistakes | "Learning curve" implying the agent itself learns |
| Auto-promotion — 3+ failures become blocking rules | Agents self-correcting without context injection |

Quick Start

# Recommended: essential profile (5 high-ROI tools)
claude mcp add rlhf -- npx -y mcp-memory-gateway serve
codex mcp add rlhf -- npx -y mcp-memory-gateway serve
amp mcp add rlhf -- npx -y mcp-memory-gateway serve
gemini mcp add rlhf "npx -y mcp-memory-gateway serve"

# Or auto-detect all installed platforms
npx mcp-memory-gateway init

# Auto-wire PreToolUse hooks (blocks known mistakes before they happen)
npx mcp-memory-gateway init --agent claude-code
npx mcp-memory-gateway init --agent codex
npx mcp-memory-gateway init --agent gemini

# Audit readiness before a long-running workflow
npx mcp-memory-gateway doctor

Profiles: Set RLHF_MCP_PROFILE=essential for the lean 5-tool setup (recommended), or leave unset for the full 12-tool pipeline. See MCP Tools for details.

Pair It With Continuity Tools

Project continuity and agent reliability are complementary, not interchangeable.

  • Use your editor, assistant, or resume workflow to regain context quickly.
  • Use MCP Memory Gateway as the reliability layer for recall, gates, and proof.

If an external tool can append structured JSONL entries with a source field, the built-in watcher can ingest them through the normal feedback pipeline:

{"source":"editor-brief","signal":"down","context":"Agent resumed without reading the migration notes","whatWentWrong":"Skipped the resume brief and edited the wrong table","whatToChange":"Read the project brief before schema changes","tags":["continuity","resume","database"]}
npx mcp-memory-gateway watch --source editor-brief

That routes the event through validation, memory promotion, vector indexing, and export eligibility without adding a second integration stack.

Guide: docs/guides/continuity-tools-integration.md

Pre-Action Gates

Gates are the enforcement layer. They physically block tool calls that match known failure patterns — no agent cooperation required.

Agent tries git push → PreToolUse hook fires → gates-engine checks rules → BLOCKED (no PR thread check)

How it works

  1. init --agent claude-code auto-wires a PreToolUse hook into your agent settings
  2. The hook pipes every Bash command through gates-engine.js
  3. Gates match tool calls against regex patterns and block/warn
  4. Auto-promotion: 3+ same-tag failures → auto-creates a warn gate. 5+ → upgrades to block.
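The auto-promotion policy in step 4 can be sketched as a simple threshold function. The thresholds come from this README; the function and field names are illustrative, not the gateway's actual API:

```javascript
// Map a count of same-tag failures to a gate action, per the
// auto-promotion thresholds described above (illustrative sketch).
function promotionAction(sameTagFailures) {
  if (sameTagFailures >= 5) return "block"; // 5+ failures: upgrade to a blocking gate
  if (sameTagFailures >= 3) return "warn";  // 3+ failures: create a warn gate
  return null;                              // below threshold: no gate yet
}
```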

Built-in gates

| Gate | Action | What it blocks |
| --- | --- | --- |
| push-without-thread-check | block | `git push` without checking PR review threads first |
| package-lock-reset | block | `git checkout <branch> -- package-lock.json` |
| force-push | block | `git push --force` / `-f` |
| protected-branch-push | block | Direct push to develop/main/master |
| env-file-edit | warn | Editing .env files |
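In spirit, gate enforcement is a regex match over the attempted command. A minimal sketch, assuming hypothetical patterns (the gate ids match the table above, but the shipped gates-engine.js rules may differ):

```javascript
// Illustrative gate table: each gate pairs a regex with an action.
const gates = [
  { id: "force-push", action: "block", pattern: /git push\b.*(--force|-f)\b/ },
  { id: "env-file-edit", action: "warn", pattern: /\.env\b/ },
];

// Return the first matching gate's verdict, or allow the command through.
function checkCommand(cmd) {
  const hit = gates.find((g) => g.pattern.test(cmd));
  return hit ? { id: hit.id, action: hit.action } : { action: "allow" };
}
```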

Custom gates

Define your own in config/gates/custom.json:

{
  "version": 1,
  "gates": [
    {
      "id": "no-npm-audit-fix",
      "pattern": "npm audit fix --force",
      "action": "block",
      "message": "npm audit fix --force can break dependencies. Review manually."
    }
  ]
}

Gate satisfaction

Some gates have unless conditions. To satisfy a gate before pushing:

# Via MCP tool
satisfy_gate(gateId: "push-without-thread-check", evidence: "0/42 unresolved")

# Via CLI
node scripts/gate-satisfy.js --gate push-without-thread-check --evidence "0 unresolved"

Evidence expires after 5 minutes (configurable TTL).
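The freshness check behind that TTL is a timestamp comparison. A minimal sketch with illustrative names, using the 5-minute default stated above:

```javascript
// Default evidence lifetime: 5 minutes, per the README (configurable in practice).
const DEFAULT_TTL_MS = 5 * 60 * 1000;

// Evidence satisfies a gate only while its age is within the TTL window.
function isEvidenceFresh(recordedAtMs, nowMs, ttlMs = DEFAULT_TTL_MS) {
  return nowMs - recordedAtMs <= ttlMs;
}
```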

Dashboard

npx mcp-memory-gateway dashboard
📊 RLHF Dashboard
══════════════════════════════════════════════
  Approval Rate    : 26% → 45% (7-day trend ↑)
  Total Signals    : 190 (15 positive, 43 negative)

🛡️ Gate Enforcement
  Active Gates     : 7 (4 manual, 3 auto-promoted)
  Actions Blocked  : 12 this week
  Actions Warned   : 8 this week
  Top Blocked      : push-without-thread-check (5×)

⚡ Prevention Impact
  Estimated Saves  : 3.2 hours
  Rules Active     : 5 prevention rules
  Last Promotion   : pr-review (2 days ago)

MCP Tools

Essential (high-ROI — start here)

These 5 tools deliver ~80% of the value. Use the essential profile for a lean setup:

RLHF_MCP_PROFILE=essential claude mcp add rlhf -- npx -y mcp-memory-gateway serve
| Tool | Description |
| --- | --- |
| `capture_feedback` | Accept up/down signal + context, validate, promote to memory |
| `recall` | Vector-search past feedback and prevention rules for current task |
| `prevention_rules` | Generate prevention rules from repeated mistakes |
| `feedback_stats` | Approval rate, per-skill/tag breakdown, trend analysis |
| `feedback_summary` | Human-readable recent feedback summary |

Full pipeline (advanced)

These tools support fine-tuning workflows, context engineering, and audit trails. Use the default profile to enable all tools:

| Tool | Description | When you need it |
| --- | --- | --- |
| `export_dpo_pairs` | Build DPO preference pairs from promoted memories | Fine-tuning a model on your feedback |
| `export_databricks_bundle` | Export RLHF logs and proof artifacts as a Databricks-ready analytics bundle | Warehousing local feedback, attribution, and proof data for Databricks / Genie Code analysis |
| `construct_context_pack` | Bounded context pack from contextfs | Custom retrieval for large projects |
| `evaluate_context_pack` | Record context pack outcome (closes learning loop) | Measuring retrieval quality |
| `list_intents` | Available action plan templates | Policy-gated workflows |
| `plan_intent` | Generate execution plan with policy checkpoints | Policy-gated workflows |
| `context_provenance` | Audit trail of context decisions | Debugging retrieval decisions |
| `satisfy_gate` | Record evidence that a gate condition is met | Unblocking gated actions (e.g., PR thread check) |
| `gate_stats` | Gate enforcement statistics (blocked/warned counts) | Monitoring gate effectiveness |
| `dashboard` | Full RLHF dashboard (approval rate, gates, prevention) | Overview of system health |
| `diagnose_failure` | Compile workflow, gate, approval, and MCP-tool constraints into a root-cause report | Systematic debugging for failed or suspect agent runs |

CLI

npx mcp-memory-gateway init              # Scaffold .rlhf/ + configure MCP
npx mcp-memory-gateway init --agent X    # + auto-wire PreToolUse hooks (claude-code/codex/gemini)
npx mcp-memory-gateway init --wire-hooks # Wire hooks only (auto-detect agent)
npx mcp-memory-gateway serve             # Start MCP server (stdio) + watcher
npx mcp-memory-gateway doctor            # Audit runtime isolation, bootstrap context, and MCP permission tier
npx mcp-memory-gateway dashboard         # Full RLHF dashboard with gate stats
npx mcp-memory-gateway north-star        # North Star progress: proof-backed workflow runs
npx mcp-memory-gateway gate-stats        # Gate enforcement statistics
npx mcp-memory-gateway status            # Learning curve dashboard
npx mcp-memory-gateway watch             # Watch .rlhf/ for external signals
npx mcp-memory-gateway capture           # Capture feedback via CLI
npx mcp-memory-gateway stats             # Analytics + Revenue-at-Risk
npx mcp-memory-gateway rules             # Generate prevention rules
npx mcp-memory-gateway export-dpo        # Export DPO training pairs
npx mcp-memory-gateway export-databricks # Export Databricks-ready analytics bundle
npx mcp-memory-gateway risk              # Train/query boosted risk scorer
npx mcp-memory-gateway self-heal         # Run self-healing diagnostics

Hosted growth tracking

The landing page ships first-party telemetry plus optional GA4 and Google Search Console hooks.

export RLHF_PUBLIC_APP_ORIGIN='https://rlhf-feedback-loop-production.up.railway.app'
export RLHF_BILLING_API_BASE_URL='https://rlhf-feedback-loop-production.up.railway.app'
export RLHF_FEEDBACK_DIR='/data/feedback'
export RLHF_GA_MEASUREMENT_ID='G-XXXXXXXXXX'          # optional
export RLHF_GOOGLE_SITE_VERIFICATION='token-value'    # optional
  • Plausible stays on by default for lightweight page analytics.
  • GA4 is only injected when RLHF_GA_MEASUREMENT_ID is set.
  • Search Console verification meta is only injected when RLHF_GOOGLE_SITE_VERIFICATION is set.
  • Hosted deployments should set RLHF_FEEDBACK_DIR=/data/feedback (or another durable path) so telemetry, billing ledgers, and proof-backed workflow-run evidence survive restarts.
  • npx mcp-memory-gateway dashboard now shows whether traffic, SEO, funnel, and revenue instrumentation are actually configured and receiving events.

JSONL File Watcher

The serve command automatically starts a background watcher that monitors feedback-log.jsonl for entries written by external sources (Amp plugins, shell hooks, CI scripts). These entries are routed through the full captureFeedback() pipeline — validation, memory promotion, vector indexing, and DPO eligibility.

# Standalone watcher
npx mcp-memory-gateway watch --source amp-plugin-bridge

# Process pending entries once and exit
npx mcp-memory-gateway watch --once

External sources write entries with a source field:

{"signal":"positive","context":"Agent fixed bug on first try","source":"amp-plugin-bridge","tags":["amp-ui-bridge"]}

The watcher tracks its position via .rlhf/.watcher-offset for crash-safe, idempotent processing.

Architecture

Value tiers

| Tier | Components | Impact |
| --- | --- | --- |
| Core (use now) | capture_feedback + recall + prevention_rules + enforcement hooks | Captures mistakes, prevents repeats, constrains behavior |
| Gates (use now) | Pre-action gates + auto-promotion + satisfy_gate + dashboard | Physically blocks known mistakes before they happen |
| Analytics (use now) | feedback_stats + feedback_summary + learning curve dashboard | Measures whether the agent is actually improving |
| Fine-tuning (future) | DPO/KTO export, Thompson Sampling, context packs | Infrastructure for model fine-tuning — valuable when you have a training pipeline |

~30% of the codebase delivers ~80% of the runtime value. The rest is forward-looking infrastructure for teams that export training data.

Pipeline

Six-phase pipeline: Capture → Validate → Remember → Prevent → Gate → Export

Context Engineering Architecture

Plugin Topology

Agent (Claude/Codex/Amp/Gemini)
  │
  ├── MCP tool call ──→ captureFeedback()
  ├── REST API ────────→ captureFeedback()
  ├── CLI ─────────────→ captureFeedback()
  └── External write ──→ JSONL ──→ Watcher ──→ captureFeedback()
                                        │
                                        ▼
                              ┌─────────────────┐
                              │  Full Pipeline   │
                              │  • Schema valid  │
                              │  • Rubric gate   │
                              │  • Memory promo  │
                              │  • Vector index  │
                              │  • Risk scoring  │
                              │  • RLAIF audit   │
                              │  • DPO eligible  │
                              └─────────────────┘

Agent Runner Contract

💎 Pro Pack — Production Context Engineering Configs

Curated configuration pack for teams that want a faster production setup without inventing their own guardrails from scratch.

| What You Get | Description |
| --- | --- |
| Prevention Rules | 10 curated rules covering PR workflow, git hygiene, tool misuse, memory management |
| Thompson Sampling Presets | 4 pre-tuned profiles: Conservative, Exploratory, Balanced, Strict |
| Extended Constraints | 10 RLAIF self-audit constraints (vs 6 in free tier) |
| Hook Templates | Ready-to-install Stop, UserPromptSubmit, PostToolUse hooks |
| Reminder Templates | 8 production reminder templates with priority levels |

Buy Pro ($49 one-time) →

Current pricing and traction policy: Commercial Truth

Support the Project

If MCP Memory Gateway saves you time, consider supporting development:

License

MIT. See LICENSE.
