Claude Prompts MCP Server
An MCP workflow server.
Craft reusable prompts with validation and reasoning guidance.
Orchestrate agentic workflows with a composable operator syntax.
Export as native skills.
Quick Start · What You Get · Compose Workflows · Run Anywhere · Docs
Chain + gate validation in action (haiku model) — gates catch errors and guide self-correction, even on the cheapest model
What your AI client gives you — and what this server adds
| Your client already does | This server adds |
|---|---|
| Run a prompt | Compose prompts with validation, reasoning guidance, and formatting in one expression |
| Single-shot skills | Multi-step workflows that thread context between steps |
| Execute subagents | Hand off mid-chain steps to agents with full workflow context |
| Client-native skill format | Author once as YAML, export to any client with skills:export |
| Manual prompt writing | Versioned templates with hot-reload, rollback, and history |
| Trust the output | Validate output between steps — self-evaluation and shell commands |
Quick Start
Claude Code (Recommended)
# Add marketplace (first time only)
/plugin marketplace add minipuft/minipuft-plugins
# Install
/plugin install claude-prompts@minipuft
# Try it
>>tech_evaluation_chain library:'zod' context:'API validation'
Development setup
Load plugin from local source for development:
git clone https://github.com/minipuft/claude-prompts ~/Applications/claude-prompts
cd ~/Applications/claude-prompts/server && npm install && npm run build
claude --plugin-dir ~/Applications/claude-prompts
Edit hooks/prompts → restart Claude Code. Edit TypeScript → rebuild first.
Custom prompts: Use --init=~/my-prompts to create a workspace with starter templates you own. Prompts created via resource_manager are saved to your active resources directory. See Custom Resources.
Claude Desktop
Option A: GitHub Release (recommended)
- Download `claude-prompts-{version}.mcpb` from Releases
- Drag into Claude Desktop Settings → MCP Servers
- Done
The .mcpb bundle is self-contained (~5MB) — no npm required.
Option B: NPX (auto-updates)
Add to your config file:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
{
"mcpServers": {
"claude-prompts": {
"command": "npx",
"args": ["-y", "claude-prompts@latest", "--client", "claude-code"]
}
}
}
Restart Claude Desktop and test: >>research_chain topic:'remote team policies'
VS Code / Copilot
Click the badge above for one-click install, or add manually to .vscode/mcp.json:
{
"servers": {
"claude-prompts": {
"command": "npx",
"args": ["-y", "claude-prompts@latest"]
}
}
}
Cursor
Click the badge above for one-click install, or add manually to ~/.cursor/mcp.json:
{
"mcpServers": {
"claude-prompts": {
"command": "npx",
"args": ["-y", "claude-prompts@latest", "--client=cursor"]
}
}
}
OpenCode
Option A: Plugin install (recommended — includes hooks)
npm install -g opencode-prompts
opencode-prompts install
The installer configures hooks (chain tracking, gate enforcement, state preservation), plugin registration, and MCP server. See opencode-prompts for what hooks provide.
Option B: MCP server only (manual config, no hooks)
Add to ~/.config/opencode/opencode.json:
{
"mcp": {
"claude-prompts": {
"type": "local",
"command": [
"npx",
"-y",
"claude-prompts@latest",
"--transport=stdio",
"--client=opencode"
]
}
}
}
You'll have the three MCP tools but no chain tracking, gate enforcement, or state preservation across compactions. To load your own prompts, add an environment key — see Custom Resources.
Gemini CLI
Option A: Extension install (recommended — includes hooks)
# Enable hooks (first time only — skip if already enabled)
echo '{"hooks": {"enabled": true}}' > ~/.gemini/settings.json
# Install extension
gemini extensions install https://github.com/minipuft/gemini-prompts
The extension registers the MCP server and adds hooks for >> syntax detection, chain tracking, and gate reminders. See gemini-prompts for what hooks provide.
Option B: MCP server only (manual config, no hooks)
Add to ~/.gemini/settings.json:
{
"mcpServers": {
"claude-prompts": {
"command": "npx",
"args": ["-y", "claude-prompts@latest", "--client=gemini"]
}
}
}
You'll have the three MCP tools but no >> syntax detection, chain tracking, or gate reminders. To load your own prompts, add an env key — see Custom Resources.
Other Clients (Codex, Windsurf, Zed)
Add to your MCP config file with a --client preset for deterministic handoff guidance:
| Client | Config Location | Recommended --client |
|---|---|---|
| Codex | ~/.codex/config.toml | codex |
| Windsurf | ~/.codeium/windsurf/mcp_config.json | cursor (experimental) |
| Zed | ~/.config/zed/settings.json → mcp key | unknown |
JSON-based configs (Windsurf/Zed):
{
"mcpServers": {
"claude-prompts": {
"command": "npx",
"args": ["-y", "claude-prompts@latest", "--client=cursor"]
}
}
}
Codex (~/.codex/config.toml):
[mcp_servers.claude_prompts]
command = "npx"
args = ["-y", "claude-prompts@latest", "--client=codex"]
Supported presets: claude-code, codex, gemini, opencode, cursor, unknown.
For complete per-client setup and limitations, see the Client Integration Guide.
From Source (developers only)
git clone https://github.com/minipuft/claude-prompts.git
cd claude-prompts/server
npm install && npm run build && npm test
Point your MCP config to server/dist/index.js. The esbuild bundle is self-contained.
Transport options: --transport=stdio (default), --transport=streamable-http (HTTP clients).
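For example, an MCP config entry pointing at the local build might look like this (the absolute path is a placeholder — substitute your checkout location):

```json
{
  "mcpServers": {
    "claude-prompts": {
      "command": "node",
      "args": [
        "/absolute/path/to/claude-prompts/server/dist/index.js",
        "--transport=stdio"
      ]
    }
  }
}
```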
Custom Resources
Use your own prompts, gates, methodologies, and styles. Two approaches, depending on whether you want the bundled resources:
Option A: Own workspace (recommended for full control)
Create a workspace with starter templates, then point your MCP config to it:
npx -y claude-prompts@latest --init=~/my-prompts
This creates ~/my-prompts/resources/ with starter prompts you own. Set MCP_WORKSPACE or MCP_RESOURCES_PATH to use it. Prompts created via resource_manager are saved here. Your AI can update them through MCP — no manual editing needed.
Option B: Plugin install (bundled resources + hooks)
Plugin installs (Claude Code, OpenCode, Gemini) set MCP_WORKSPACE automatically and ship the bundled 90+ prompts, gates, and methodologies. Prompts created via resource_manager are saved to the plugin's resources directory.
[!IMPORTANT]
`MCP_RESOURCES_PATH` sets the base resources directory (replaces the package default). `MCP_WORKSPACE` enables overlay — custom resources in your workspace are loaded alongside bundled ones. Prompts, gates, methodologies, and styles with the same ID as bundled ones take priority.
Config examples per client
Claude Desktop / VS Code / Cursor (JSON with env):
{
"mcpServers": {
"claude-prompts": {
"command": "npx",
"args": ["-y", "claude-prompts@latest"],
"env": {
"MCP_RESOURCES_PATH": "/path/to/your/resources"
}
}
}
}
OpenCode (JSON with environment):
{
"mcp": {
"claude-prompts": {
"type": "local",
"command": ["npx", "-y", "claude-prompts@latest", "--transport=stdio"],
"environment": {
"MCP_RESOURCES_PATH": "/path/to/your/resources"
}
}
}
}
See CLI Configuration for all env vars including fine-grained path overrides (MCP_PROMPTS_PATH, MCP_GATES_PATH, etc.).
See the dashboard — system status overview
Loaded resources, active configuration, and server health at a glance
What You Get
Four resource types you author, version, and compose into workflows.
See the catalog — listing all available prompts
90 prompts across 11 categories — all hot-reloadable and versionable
Prompt Templates
Versioned YAML with hot-reload. Edit a template, test it immediately — or ask your AI to update it through MCP.
>>code_review target:'src/auth/' language:'typescript'
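A minimal template might look like the sketch below. The field names here are illustrative assumptions, not the real schema — see the Prompt YAML Schema reference for the actual fields:

```yaml
# code_review.yaml — illustrative sketch, field names assumed
id: code_review
description: Review code for correctness and style issues
arguments:
  - name: target
    required: true
  - name: language
    required: false
template: |
  Review the code under {{target}}.
  Report issues with file and line references.
```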
Validation Rules (Gates)
Criteria the AI checks its own output against. Blocking or advisory.
:: 'no false positives' :: 'cite sources with links'
Failed checks can retry automatically or pause for your decision.
[!TIP] Define your own checks. See the Gates Guide for blocking vs advisory rules, retry behavior, and shell verification.
Reasoning Guidance (Methodologies)
Frameworks that shape how the AI thinks through a problem — not just what it outputs. 6 built-in, or create your own.
@CAGEERF # Context → Analysis → Goals → Execution → Evaluation → Refinement
@ReACT # Reason → Act → Observe loops
@5W1H # Who, What, Where, When, Why, How
[!TIP] Create your own framework. See the Methodologies Guide for built-in frameworks and custom authoring.
Styles
Response formatting and tone.
#analytical # Structured, evidence-based output
#concise # Brief, action-focused
All resources are hot-reloadable, versioned with rollback history, and managed through the resource_manager tool.
[!TIP] Ready to build your own? Start with the Prompt Authoring Tutorial.
Compose Workflows
The operator syntax wires resources together — chain steps, add validation inline, hand off steps to agents.
>>review target:'src/auth/' @CAGEERF :: 'no false positives'
--> security_scan :: verify:"npm test"
--> recommendations :: 'actionable, with code'
==> implementation
See the chain — phases completing back-to-back
Phases compound reasoning across steps — each step builds on validated output from the previous one
See the output — tech evaluation chain with context7 research
Context7 fetches live library docs mid-chain — final output is a structured assessment with sources
What happened:
- Loaded the `review` template with arguments
- Injected CAGEERF reasoning guidance
- Added a validation rule (AI self-evaluates against it)
- Chained output to the next step
- Ran a shell command for ground-truth validation
- Handed the final step off to a client-native subagent
[!TIP] Chains support conditional branching, context threading, and agent handoffs. Chains Lifecycle · MCP Tools Reference
Verification Loops
Ground-truth validation via shell commands — the AI keeps iterating until tests pass:
>>implement-feature :: verify:"npm test" loop:true
Implements, runs the test, reads failures, fixes, retries. Spawns a fresh context after repeated failures to avoid context rot.
| Preset | Tries | Timeout | Use Case |
|---|---|---|---|
| :fast | 1 | 30s | Quick check |
| :full | 5 | 5 min | CI validation |
| :extended | 10 | 10 min | Large test suites |
[!TIP] Autonomous test-fix cycles. See Ralph Loops for presets, timeout configuration, and context-rot prevention.
Judge Mode
Let the AI pick the right resources for the task:
%judge Help me refactor this authentication module
Analyzes available templates, reasoning frameworks, validation rules, and styles — applies the best combination automatically.
[!TIP] How judge mode selects resources. See Judge Mode Guide for scoring, overrides, and preview with %judge.
Run Anywhere
Author workflows as YAML templates. Export as native skills to your client.
# skills-sync.yaml — choose what to export
registrations:
claude-code:
user:
- prompt:development/review
- prompt:development/validate_work
npm run skills:export
The review prompt becomes a /review Claude Code skill. validate_work becomes /validate_work. Same source, native experience — no MCP call required at runtime.
Compiles to Claude Code skills, Cursor rules, OpenCode commands, and more. npm run skills:diff flags when exports drift from source.
See the export — dry-run compile + skill preview
Dry-run compiles YAML templates into native client skills — review before writing
[!TIP] The Skills Sync Guide covers configuration, supported clients, and drift detection.
With Hooks
Well-composed prompts carry their own structure. Hooks keep the experience consistent across models and long sessions.
What hooks do
Route operator syntax to the right tool automatically. Track workflow progress across steps and long sessions. Enforce validation rules and step handoffs between agents.
| Behavior | What happens |
|---|---|
| Prompt routing | >>analyze in conversation → correct MCP tool call |
| Chain continuity | Injects step progress and continuation between steps |
| Validation tracking | Tracks pass/fail verdicts across chain steps |
| Agent handoffs | Routes ==> steps to client-native subagents |
| Session persistence | Preserves workflow state through context compaction |
Hooks ship with the plugin install. Available for Claude Code (full), OpenCode (full), Gemini CLI (partial). Other clients: MCP tools only.
Syntax Reference
| Symbol | Name | What It Does | Example |
|---|---|---|---|
| >> | Prompt | Execute template | >>code_review |
| --> | Chain | Pipe to next step | step1 --> step2 |
| ==> | Handoff | Route step to agent | step1 ==> agent_step |
| * | Repeat | Run prompt N times | >>brainstorm * 5 |
| @ | Framework | Inject reasoning guidance | @CAGEERF |
| :: | Gate | Add validation criteria | :: 'cite sources' |
| % | Modifier | Toggle behavior | %clean, %judge |
| # | Style | Apply formatting | #analytical |
Modifiers:
- %clean — No framework/gate injection
- %lean — Gates only, skip framework
- %guided — Force framework injection
- %judge — AI selects best resources
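Modifiers prefix a command. A hedged example (the template name and combination are illustrative):

```
%lean >>code_review target:'src/auth/' :: 'no false positives'
```

This would run the template with the gate applied but skip framework injection.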
→ MCP Tools Reference for full command documentation.
The Three Tools
| Tool | Purpose |
|---|---|
| prompt_engine | Execute prompts with frameworks and validation |
| resource_manager | Create, update, version, and export resources |
| system_control | Status, analytics, framework switching |
prompt_engine(command:"@CAGEERF >>analysis topic:'AI safety'")
resource_manager(resource_type:"prompt", action:"list")
system_control(action:"status")
How It Works
%%{init: {'theme': 'neutral', 'themeVariables': {'background':'#0b1224','primaryColor':'#e2e8f0','primaryBorderColor':'#1f2937','primaryTextColor':'#0f172a','lineColor':'#94a3b8','fontFamily':'"DM Sans","Segoe UI",sans-serif','fontSize':'14px','edgeLabelBackground':'#0b1224'}}}%%
flowchart TB
classDef actor fill:#0f172a,stroke:#cbd5e1,stroke-width:1.5px,color:#f8fafc;
classDef server fill:#111827,stroke:#fbbf24,stroke-width:1.8px,color:#f8fafc;
classDef process fill:#e2e8f0,stroke:#1f2937,stroke-width:1.6px,color:#0f172a;
classDef client fill:#f4d0ff,stroke:#a855f7,stroke-width:1.6px,color:#2e1065;
classDef clientbg fill:#1a0a24,stroke:#a855f7,stroke-width:1.8px,color:#f8fafc;
classDef decision fill:#fef3c7,stroke:#f59e0b,stroke-width:1.6px,color:#78350f;
linkStyle default stroke:#94a3b8,stroke-width:2px
User["1. User sends command"]:::actor
Example[">>analyze @CAGEERF :: 'cite sources'"]:::actor
User --> Example --> Parse
subgraph Server["MCP Server"]
direction TB
Parse["2. Parse operators"]:::process
Inject["3. Inject framework + gates"]:::process
Render["4. Render prompt"]:::process
Decide{"6. Route verdict"}:::decision
Parse --> Inject --> Render
end
Server:::server
subgraph Client["Claude (Client)"]
direction TB
Execute["5. Run prompt + check gates"]:::client
end
Client:::clientbg
Render -->|"Prompt with gate criteria"| Execute
Execute -->|"Verdict + output"| Decide
Decide -->|"PASS → render next step"| Render
Decide -->|"FAIL → render retry prompt"| Render
Decide -->|"Done"| Result["7. Return to user"]:::actor
Command with operators → server parses and injects resources → client executes and self-evaluates → route: next step (pass), retry (fail), or return result (done).
Documentation
| I want to... | Go here |
|---|---|
| Build my first prompt | Prompt Authoring Tutorial |
| Chain multi-step workflows | Chains Lifecycle |
| Add validation to workflows | Gates Guide |
| Use or create reasoning frameworks | Methodologies Guide |
| Use autonomous verification loops | Ralph Loops |
| Configure per-client MCP installs and --client presets | Client Integration Guide |
| Compare client profile mapping and limitations | Client Capabilities Reference |
| Export skills to other clients | Skills Sync |
| Configure the server | CLI & Configuration |
| Let the AI pick resources automatically | Judge Mode Guide |
| Look up MCP tool parameters | MCP Tools Reference |
| Look up prompt YAML fields | Prompt YAML Schema |
| Understand the architecture | Architecture Overview |
| Fix common issues | Troubleshooting |
Contributing
cd server
npm install
npm run build # esbuild bundles to dist/index.js
npm test # Run test suite
npm run validate:all # Full CI validation
The build produces a self-contained bundle. server/dist/ is gitignored — CI builds fresh from source.
See CONTRIBUTING.md for workflow details.
License