Claude Prompts MCP Server

npm version License: AGPL v3

An MCP workflow server.

Craft reusable prompts with validation and reasoning guidance.
Orchestrate agentic workflows with a composable operator syntax.
Export as native skills.

Quick Start · What You Get · Compose Workflows · Run Anywhere · Docs

Chain workflow with gate validation — prompt executes through hooks, gate catches missing field on first attempt then self-corrects

Chain + gate validation in action (haiku model) — gates catch errors and guide self-correction, even on the cheapest model

What your AI client gives you — and what this server adds

| Your client already does | This server adds |
|---|---|
| Run a prompt | Compose prompts with validation, reasoning guidance, and formatting in one expression |
| Single-shot skills | Multi-step workflows that thread context between steps |
| Execute subagents | Hand off mid-chain steps to agents with full workflow context |
| Client-native skill format | Author once as YAML, export to any client with skills:export |
| Manual prompt writing | Versioned templates with hot-reload, rollback, and history |
| Trust the output | Validate output between steps — self-evaluation and shell commands |

Quick Start

Claude Code (Recommended)

# Add marketplace (first time only)
/plugin marketplace add minipuft/minipuft-plugins

# Install
/plugin install claude-prompts@minipuft

# Try it
>>tech_evaluation_chain library:'zod' context:'API validation'
Development setup

Load plugin from local source for development:

git clone https://github.com/minipuft/claude-prompts ~/Applications/claude-prompts
cd ~/Applications/claude-prompts/server && npm install && npm run build
claude --plugin-dir ~/Applications/claude-prompts

Edit hooks/prompts → restart Claude Code. Edit TypeScript → rebuild first.

Custom prompts: Use --init=~/my-prompts to create a workspace with starter templates you own. Prompts created via resource_manager are saved to your active resources directory. See Custom Resources.


Claude Desktop

Option A: GitHub Release (recommended)

  1. Download claude-prompts-{version}.mcpb from Releases
  2. Drag into Claude Desktop Settings → MCP Servers
  3. Done

The .mcpb bundle is self-contained (~5MB) — no npm required.

Option B: NPX (auto-updates)

Add to your config file:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "claude-prompts": {
      "command": "npx",
      "args": ["-y", "claude-prompts@latest", "--client", "claude-code"]
    }
  }
}

Restart Claude Desktop and test: >>research_chain topic:'remote team policies'


VS Code / Copilot

Install in VS Code

Click the badge above for one-click install, or add manually to .vscode/mcp.json:

{
  "servers": {
    "claude-prompts": {
      "command": "npx",
      "args": ["-y", "claude-prompts@latest"]
    }
  }
}
Cursor

Install in Cursor

Click the badge above for one-click install, or add manually to ~/.cursor/mcp.json:

{
  "mcpServers": {
    "claude-prompts": {
      "command": "npx",
      "args": ["-y", "claude-prompts@latest", "--client=cursor"]
    }
  }
}
OpenCode

Option A: Plugin install (recommended — includes hooks)

npm install -g opencode-prompts
opencode-prompts install

The installer configures hooks (chain tracking, gate enforcement, state preservation), plugin registration, and MCP server. See opencode-prompts for what hooks provide.

Option B: MCP server only (manual config, no hooks)

Add to ~/.config/opencode/opencode.json:

{
  "mcp": {
    "claude-prompts": {
      "type": "local",
      "command": [
        "npx",
        "-y",
        "claude-prompts@latest",
        "--transport=stdio",
        "--client=opencode"
      ]
    }
  }
}

You'll have the three MCP tools but no chain tracking, gate enforcement, or state preservation across compactions. To load your own prompts, add an environment key — see Custom Resources.

Gemini CLI

Option A: Extension install (recommended — includes hooks)

# Enable hooks (first time only — skip if already enabled)
echo '{"hooks": {"enabled": true}}' > ~/.gemini/settings.json

# Install extension
gemini extensions install https://github.com/minipuft/gemini-prompts

The extension registers the MCP server and adds hooks for >> syntax detection, chain tracking, and gate reminders. See gemini-prompts for what hooks provide.

Option B: MCP server only (manual config, no hooks)

Add to ~/.gemini/settings.json:

{
  "mcpServers": {
    "claude-prompts": {
      "command": "npx",
      "args": ["-y", "claude-prompts@latest", "--client=gemini"]
    }
  }
}

You'll have the three MCP tools but no >> syntax detection, chain tracking, or gate reminders. To load your own prompts, add an env key — see Custom Resources.

Other Clients (Codex, Windsurf, Zed)

Add to your MCP config file with a --client preset for deterministic handoff guidance:

| Client | Config Location | Recommended --client |
|---|---|---|
| Codex | ~/.codex/config.toml | codex |
| Windsurf | ~/.codeium/windsurf/mcp_config.json | cursor (experimental) |
| Zed | ~/.config/zed/settings.json (mcp key) | unknown |

JSON-based configs (Windsurf/Zed):

{
  "mcpServers": {
    "claude-prompts": {
      "command": "npx",
      "args": ["-y", "claude-prompts@latest", "--client=cursor"]
    }
  }
}

Codex (~/.codex/config.toml):

[mcp_servers.claude_prompts]
command = "npx"
args = ["-y", "claude-prompts@latest", "--client=codex"]

Supported presets: claude-code, codex, gemini, opencode, cursor, unknown.

For complete per-client setup and limitations, see the Client Integration Guide.

From Source (developers only)
git clone https://github.com/minipuft/claude-prompts.git
cd claude-prompts/server
npm install && npm run build && npm test

Point your MCP config to server/dist/index.js. The esbuild bundle is self-contained.

Transport options: --transport=stdio (default), --transport=streamable-http (HTTP clients).

Custom Resources

Use your own prompts, gates, methodologies, and styles. Two approaches, depending on whether you want the bundled resources alongside your own:

Option A: Own workspace (recommended for full control)

Create a workspace with starter templates, then point your MCP config to it:

npx -y claude-prompts@latest --init=~/my-prompts

This creates ~/my-prompts/resources/ with starter prompts you own. Set MCP_WORKSPACE or MCP_RESOURCES_PATH to use it. Prompts created via resource_manager are saved here. Your AI can update them through MCP — no manual editing needed.

Option B: Plugin install (bundled resources + hooks)

Plugin installs (Claude Code, OpenCode, Gemini) set MCP_WORKSPACE automatically and ship the bundled 90+ prompts, gates, and methodologies. Prompts created via resource_manager are saved to the plugin's resources directory.

[!IMPORTANT] MCP_RESOURCES_PATH sets the base resources directory (replacing the package default). MCP_WORKSPACE enables overlay mode — custom resources in your workspace load alongside the bundled ones, and workspace prompts, gates, methodologies, or styles with the same ID as a bundled resource take priority.

Config examples per client

Claude Desktop / VS Code / Cursor (JSON with env):

{
  "mcpServers": {
    "claude-prompts": {
      "command": "npx",
      "args": ["-y", "claude-prompts@latest"],
      "env": {
        "MCP_RESOURCES_PATH": "/path/to/your/resources"
      }
    }
  }
}
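To overlay a workspace on top of the bundled resources instead of replacing them, swap the env key (same config shape; the workspace path is illustrative):

```json
{
  "mcpServers": {
    "claude-prompts": {
      "command": "npx",
      "args": ["-y", "claude-prompts@latest"],
      "env": {
        "MCP_WORKSPACE": "/path/to/my-prompts"
      }
    }
  }
}
```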

OpenCode (JSON with environment):

{
  "mcp": {
    "claude-prompts": {
      "type": "local",
      "command": ["npx", "-y", "claude-prompts@latest", "--transport=stdio"],
      "environment": {
        "MCP_RESOURCES_PATH": "/path/to/your/resources"
      }
    }
  }
}

See CLI Configuration for all env vars including fine-grained path overrides (MCP_PROMPTS_PATH, MCP_GATES_PATH, etc.).


See the dashboard — system status overview
System status demo showing loaded prompts, gates, methodologies, and active configuration

Loaded resources, active configuration, and server health at a glance


What You Get

Four resource types you author, version, and compose into workflows.

See the catalog — listing all available prompts
Listing all available prompts across 11 categories using the resource_manager tool

90 prompts across 11 categories — all hot-reloadable and versionable

Prompt Templates

Versioned YAML with hot-reload. Edit a template, test it immediately — or ask your AI to update it through MCP.

>>code_review target:'src/auth/' language:'typescript'
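A template behind a command like this might look roughly as follows — the field names here are illustrative, not authoritative; see the Prompt YAML Schema for the actual fields:

```yaml
# Illustrative sketch only — consult the Prompt YAML Schema for real field names.
id: code_review
category: development
description: Review code for correctness and style issues
arguments:
  - name: target
    required: true
  - name: language
    required: false
template: |
  Review the code in {{target}} (language: {{language}}).
  Report concrete issues with file and line references.
```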

Validation Rules (Gates)

Criteria the AI checks its own output against. Blocking or advisory.

:: 'no false positives' :: 'cite sources with links'

Failed checks can retry automatically or pause for your decision.

[!TIP] Define your own checks. See the Gates Guide for blocking vs advisory rules, retry behavior, and shell verification.

Reasoning Guidance (Methodologies)

Frameworks that shape how the AI thinks through a problem — not just what it outputs. 6 built-in, or create your own.

@CAGEERF    # Context → Analysis → Goals → Execution → Evaluation → Refinement
@ReACT      # Reason → Act → Observe loops
@5W1H       # Who, What, Where, When, Why, How

[!TIP] Create your own framework. See the Methodologies Guide for built-in frameworks and custom authoring.

Styles

Response formatting and tone.

#analytical    # Structured, evidence-based output
#concise       # Brief, action-focused

All resources are hot-reloadable, versioned with rollback history, and managed through the resource_manager tool.

[!TIP] Ready to build your own? Start with the Prompt Authoring Tutorial.


Compose Workflows

The operator syntax wires resources together — chain steps, add validation inline, hand off steps to agents.

>>review target:'src/auth/' @CAGEERF :: 'no false positives'
  --> security_scan :: verify:"npm test"
  --> recommendations :: 'actionable, with code'
  ==> implementation
See the chain — phases completing back-to-back
Chain phases 3-4 executing back-to-back, compounding reasoning across steps before rendering final output

Phases compound reasoning across steps — each step builds on validated output from the previous one

See the output — tech evaluation chain with context7 research
Tech evaluation chain researching Zod via context7, producing a scored assessment table with security, performance, DX, integration, and ecosystem ratings

Context7 fetches live library docs mid-chain — final output is a structured assessment with sources

What happened:

  1. Loaded the review template with arguments
  2. Injected CAGEERF reasoning guidance
  3. Added a validation rule (AI self-evaluates against it)
  4. Chained output to the next step
  5. Ran a shell command for ground-truth validation
  6. Handed the final step off to a client-native subagent

[!TIP] Chains support conditional branching, context threading, and agent handoffs. Chains Lifecycle · MCP Tools Reference

Verification Loops

Ground-truth validation via shell commands — the AI keeps iterating until tests pass:

>>implement-feature :: verify:"npm test" loop:true

Implements, runs the test, reads failures, fixes, retries. Spawns a fresh context after repeated failures to avoid context rot.

| Preset | Tries | Timeout | Use Case |
|---|---|---|---|
| :fast | 1 | 30s | Quick check |
| :full | 5 | 5 min | CI validation |
| :extended | 10 | 10 min | Large test suites |
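The retry mechanics can be sketched in a few lines of shell — this is an assumed illustration of the loop's behavior, not the server's actual implementation. The stub verify() fails twice and then passes, standing in for a real check like npm test:

```shell
# Sketch of a bounded verify loop (assumed behavior, not the server's code).
attempts=0
verify() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]   # stub: pass on the 3rd attempt
}

max_tries=5
try=1
while [ "$try" -le "$max_tries" ]; do
  if verify; then
    echo "PASS on try $try"
    break
  fi
  try=$((try + 1))
done
```

With the stub above, the loop prints PASS on try 3 and stops; a real run would substitute the preset's tries and timeout.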

[!TIP] Autonomous test-fix cycles. See Ralph Loops for presets, timeout configuration, and context-rot prevention.

Judge Mode

Let the AI pick the right resources for the task:

%judge Help me refactor this authentication module

Analyzes available templates, reasoning frameworks, validation rules, and styles — applies the best combination automatically.

[!TIP] How judge mode selects resources. See Judge Mode Guide for scoring, overrides, and preview with %judge.


Run Anywhere

Author workflows as YAML templates. Export as native skills to your client.

# skills-sync.yaml — choose what to export
registrations:
  claude-code:
    user:
      - prompt:development/review
      - prompt:development/validate_work
npm run skills:export

The review prompt becomes a /review Claude Code skill. validate_work becomes /validate_work. Same source, native experience — no MCP call required at runtime.

Compiles to Claude Code skills, Cursor rules, OpenCode commands, and more. npm run skills:diff flags when exports drift from source.

See the export — dry-run compile + skill preview
Skills export dry-run compiling prompts to native skill files, then bat preview of the generated review skill with phases, gates, and arguments

Dry-run compiles YAML templates into native client skills — review before writing

[!TIP] The Skills Sync Guide covers configuration, supported clients, and drift detection.


With Hooks

Well-composed prompts carry their own structure. Hooks keep the experience consistent across models and long sessions.

What hooks do

Route operator syntax to the right tool automatically. Track workflow progress across steps and long sessions. Enforce validation rules and step handoffs between agents.

| Behavior | What happens |
|---|---|
| Prompt routing | >>analyze in conversation → correct MCP tool call |
| Chain continuity | Injects step progress and continuation between steps |
| Validation tracking | Tracks pass/fail verdicts across chain steps |
| Agent handoffs | Routes ==> steps to client-native subagents |
| Session persistence | Preserves workflow state through context compaction |

Hooks ship with the plugin install. Available for Claude Code (full), OpenCode (full), Gemini CLI (partial). Other clients: MCP tools only.

hooks/README.md


Syntax Reference
| Symbol | Name | What It Does | Example |
|---|---|---|---|
| >> | Prompt | Execute template | >>code_review |
| --> | Chain | Pipe to next step | step1 --> step2 |
| ==> | Handoff | Route step to agent | step1 ==> agent_step |
| * | Repeat | Run prompt N times | >>brainstorm * 5 |
| @ | Framework | Inject reasoning guidance | @CAGEERF |
| :: | Gate | Add validation criteria | :: 'cite sources' |
| % | Modifier | Toggle behavior | %clean, %judge |
| # | Style | Apply formatting | #analytical |

Modifiers:

  • %clean — No framework/gate injection
  • %lean — Gates only, skip framework
  • %guided — Force framework injection
  • %judge — AI selects best resources

MCP Tools Reference for full command documentation.
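As a rough illustration of how the symbols above map to structured operations, here is a hypothetical tokenizer — this is not the server's actual parser, just a sketch of the idea:

```typescript
// Hypothetical sketch — NOT the server's real parser. It shows how the
// operator symbols could be tokenized into structured operations.
type Op = {
  kind: "prompt" | "framework" | "gate" | "style" | "modifier";
  value: string;
};

function tokenize(input: string): Op[] {
  const ops: Op[] = [];
  // Each alternative captures the argument of one operator prefix.
  const re = />>(\S+)|@(\S+)|::\s*'([^']*)'|#(\S+)|%(\S+)/g;
  let m: RegExpExecArray | null;
  while ((m = re.exec(input)) !== null) {
    if (m[1] !== undefined) ops.push({ kind: "prompt", value: m[1] });
    else if (m[2] !== undefined) ops.push({ kind: "framework", value: m[2] });
    else if (m[3] !== undefined) ops.push({ kind: "gate", value: m[3] });
    else if (m[4] !== undefined) ops.push({ kind: "style", value: m[4] });
    else if (m[5] !== undefined) ops.push({ kind: "modifier", value: m[5] });
  }
  return ops;
}

const ops = tokenize(">>analyze @CAGEERF :: 'cite sources' #analytical");
// → prompt:analyze, framework:CAGEERF, gate:"cite sources", style:analytical
```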

The Three Tools
| Tool | Purpose |
|---|---|
| prompt_engine | Execute prompts with frameworks and validation |
| resource_manager | Create, update, version, and export resources |
| system_control | Status, analytics, framework switching |
prompt_engine(command:"@CAGEERF >>analysis topic:'AI safety'")
resource_manager(resource_type:"prompt", action:"list")
system_control(action:"status")

How It Works

%%{init: {'theme': 'neutral', 'themeVariables': {'background':'#0b1224','primaryColor':'#e2e8f0','primaryBorderColor':'#1f2937','primaryTextColor':'#0f172a','lineColor':'#94a3b8','fontFamily':'"DM Sans","Segoe UI",sans-serif','fontSize':'14px','edgeLabelBackground':'#0b1224'}}}%%
flowchart TB
    classDef actor fill:#0f172a,stroke:#cbd5e1,stroke-width:1.5px,color:#f8fafc;
    classDef server fill:#111827,stroke:#fbbf24,stroke-width:1.8px,color:#f8fafc;
    classDef process fill:#e2e8f0,stroke:#1f2937,stroke-width:1.6px,color:#0f172a;
    classDef client fill:#f4d0ff,stroke:#a855f7,stroke-width:1.6px,color:#2e1065;
    classDef clientbg fill:#1a0a24,stroke:#a855f7,stroke-width:1.8px,color:#f8fafc;
    classDef decision fill:#fef3c7,stroke:#f59e0b,stroke-width:1.6px,color:#78350f;

    linkStyle default stroke:#94a3b8,stroke-width:2px

    User["1. User sends command"]:::actor
    Example[">>analyze @CAGEERF :: 'cite sources'"]:::actor
    User --> Example --> Parse

    subgraph Server["MCP Server"]
        direction TB
        Parse["2. Parse operators"]:::process
        Inject["3. Inject framework + gates"]:::process
        Render["4. Render prompt"]:::process
        Decide{"6. Route verdict"}:::decision
        Parse --> Inject --> Render
    end
    Server:::server

    subgraph Client["Claude (Client)"]
        direction TB
        Execute["5. Run prompt + check gates"]:::client
    end
    Client:::clientbg

    Render -->|"Prompt with gate criteria"| Execute
    Execute -->|"Verdict + output"| Decide

    Decide -->|"PASS → render next step"| Render
    Decide -->|"FAIL → render retry prompt"| Render
    Decide -->|"Done"| Result["7. Return to user"]:::actor

Command with operators → server parses and injects resources → client executes and self-evaluates → route: next step (pass), retry (fail), or return result (done).


Documentation

| I want to... | Go here |
|---|---|
| Build my first prompt | Prompt Authoring Tutorial |
| Chain multi-step workflows | Chains Lifecycle |
| Add validation to workflows | Gates Guide |
| Use or create reasoning frameworks | Methodologies Guide |
| Use autonomous verification loops | Ralph Loops |
| Configure per-client MCP installs and --client presets | Client Integration Guide |
| Compare client profile mapping and limitations | Client Capabilities Reference |
| Export skills to other clients | Skills Sync |
| Configure the server | CLI & Configuration |
| Let the AI pick resources automatically | Judge Mode Guide |
| Look up MCP tool parameters | MCP Tools Reference |
| Look up prompt YAML fields | Prompt YAML Schema |
| Understand the architecture | Architecture Overview |
| Fix common issues | Troubleshooting |

Contributing

cd server
npm install
npm run build        # esbuild bundles to dist/index.js
npm test             # Run test suite
npm run validate:all # Full CI validation

The build produces a self-contained bundle. server/dist/ is gitignored — CI builds fresh from source.

See CONTRIBUTING.md for workflow details.


License

AGPL-3.0
