ClarifyPrompt MCP
An MCP server that transforms vague prompts into platform-optimized prompts for 58+ AI platforms across 7 categories — with support for registering custom platforms and providing markdown instruction files.
Send a raw prompt. Get back a version specifically optimized for Midjourney, DALL-E, Sora, Runway, ElevenLabs, Claude, ChatGPT, or any of the 58+ supported platforms — with the right syntax, parameters, and structure each platform expects. Register your own platforms and provide custom optimization instructions via .md files.
How It Works
You write: "a dragon flying over a castle at sunset"
ClarifyPrompt returns (for Midjourney):
"a majestic dragon flying over a medieval castle at sunset
--ar 16:9 --v 6.1 --style raw --q 2 --chaos 30 --s 700"
ClarifyPrompt returns (for DALL-E):
"A majestic dragon flying over a castle at sunset. Size: 1024x1024"
Same prompt, different platform, completely different output. ClarifyPrompt knows what each platform expects.
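The platform-specific tailoring above can be pictured in a short TypeScript sketch. This is illustrative only: ClarifyPrompt's real strategies are LLM-driven, and the names below (`platformTailors`, `tailorPrompt`) are hypothetical, not the server's code.

```typescript
// Illustrative sketch only: the real strategies are LLM-driven, but the
// per-platform difference amounts to emitting the syntax each platform expects.
const platformTailors: Record<string, (prompt: string) => string> = {
  // Midjourney takes inline CLI-style flags after the prompt text
  midjourney: (p) => `${p} --ar 16:9 --v 6.1 --style raw`,
  // DALL-E takes plain prose plus explicit size metadata
  "dall-e": (p) => `${p}. Size: 1024x1024`,
};

function tailorPrompt(prompt: string, platform: string): string {
  const tailor = platformTailors[platform];
  return tailor ? tailor(prompt) : prompt; // unknown platform: pass through
}
```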
Quick Start
With Claude Desktop
Add to your claude_desktop_config.json:
{
  "mcpServers": {
    "clarifyprompt": {
      "command": "npx",
      "args": ["-y", "clarifyprompt-mcp"],
      "env": {
        "LLM_API_URL": "http://localhost:11434/v1",
        "LLM_MODEL": "qwen2.5:7b"
      }
    }
  }
}
With Claude Code
claude mcp add clarifyprompt -- npx -y clarifyprompt-mcp
Set the environment variables in your shell before launching:
export LLM_API_URL=http://localhost:11434/v1
export LLM_MODEL=qwen2.5:7b
With Cursor
Add to your .cursor/mcp.json:
{
  "mcpServers": {
    "clarifyprompt": {
      "command": "npx",
      "args": ["-y", "clarifyprompt-mcp"],
      "env": {
        "LLM_API_URL": "http://localhost:11434/v1",
        "LLM_MODEL": "qwen2.5:7b"
      }
    }
  }
}
Supported Platforms (58+ built-in, unlimited custom)
| Category | Platforms | Default |
|---|---|---|
| Image (10) | Midjourney, DALL-E 3, Stable Diffusion, Flux, Ideogram, Leonardo AI, Adobe Firefly, Grok Aurora, Google Imagen 3, Recraft | Midjourney |
| Video (11) | Sora, Runway Gen-3, Pika Labs, Kling AI, Luma, Minimax/Hailuo, Google Veo 2, Wan, HeyGen, Synthesia, CogVideoX | Runway |
| Chat (9) | Claude, ChatGPT, Gemini, Llama, DeepSeek, Qwen, Kimi, GLM, Minimax | Claude |
| Code (9) | Claude, ChatGPT, Cursor, GitHub Copilot, Windsurf, DeepSeek Coder, Qwen Coder, Codestral, Gemini | Claude |
| Document (8) | Claude, ChatGPT, Gemini, Jasper, Copy.ai, Notion AI, Grammarly, Writesonic | Claude |
| Voice (7) | ElevenLabs, OpenAI TTS, Fish Audio, Sesame, Google TTS, PlayHT, Kokoro | ElevenLabs |
| Music (4) | Suno AI, Udio, Stable Audio, MusicGen | Suno |
Tools
optimize_prompt
The main tool. Optimizes a prompt for a specific AI platform.
{
  "prompt": "a cat sitting on a windowsill",
  "category": "image",
  "platform": "midjourney",
  "mode": "concise"
}
All parameters except prompt are optional. When category and platform are omitted, ClarifyPrompt auto-detects them from the prompt content.
Three calling modes:
| Mode | Example |
|---|---|
| Zero-config | { "prompt": "sunset over mountains" } |
| Category only | { "prompt": "...", "category": "image" } |
| Fully explicit | { "prompt": "...", "category": "image", "platform": "dall-e" } |
Parameters:
| Parameter | Required | Description |
|---|---|---|
| prompt | Yes | The prompt to optimize |
| category | No | One of chat, image, video, voice, music, code, document. Auto-detected when omitted. |
| platform | No | Platform ID (e.g. midjourney, dall-e, sora, claude). Uses the category default when omitted. |
| mode | No | Output style: concise, detailed, structured, step-by-step, bullet-points, technical, simple. Default: detailed. |
| enrich_context | No | Set true to use web search for context enrichment. Default: false. |
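The defaulting rules in the table can be sketched as follows. Names here are hypothetical, and the fallback to "chat" is a placeholder: the real server auto-detects a missing category from the prompt content instead.

```typescript
// Hypothetical sketch of how omitted parameters might be filled in.
interface OptimizeRequest {
  prompt: string;
  category?: string;
  platform?: string;
  mode?: string;
  enrich_context?: boolean;
}

// Category defaults per the "Supported Platforms" table
const categoryDefaults: Record<string, string> = {
  image: "midjourney", video: "runway", chat: "claude", code: "claude",
  document: "claude", voice: "elevenlabs", music: "suno",
};

function withDefaults(req: OptimizeRequest): Required<OptimizeRequest> {
  const category = req.category ?? "chat"; // placeholder for auto-detection
  return {
    prompt: req.prompt,
    category,
    platform: req.platform ?? categoryDefaults[category],
    mode: req.mode ?? "detailed",
    enrich_context: req.enrich_context ?? false,
  };
}
```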
Response:
{
  "originalPrompt": "a dragon flying over a castle at sunset",
  "optimizedPrompt": "a majestic dragon flying over a medieval castle at sunset --ar 16:9 --v 6.1 --style raw --q 2 --s 700",
  "category": "image",
  "platform": "midjourney",
  "mode": "concise",
  "detection": {
    "autoDetected": true,
    "detectedCategory": "image",
    "detectedPlatform": "midjourney",
    "confidence": "high"
  },
  "metadata": {
    "model": "qwen2.5:14b-instruct-q4_K_M",
    "processingTimeMs": 3911,
    "strategy": "ImageStrategy"
  }
}
The detection field only appears when auto-detection was used. When category and platform are provided explicitly, detection is skipped.
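The response shape above can be captured as a TypeScript interface. Field names follow the example JSON; the "medium" and "low" confidence values are an assumption, since only "high" appears in the docs.

```typescript
// Sketch of the response shape; `detection` is optional because it only
// appears when auto-detection was used. "medium"/"low" are assumptions.
interface OptimizeResponse {
  originalPrompt: string;
  optimizedPrompt: string;
  category: string;
  platform: string;
  mode: string;
  detection?: {
    autoDetected: boolean;
    detectedCategory: string;
    detectedPlatform: string;
    confidence: "high" | "medium" | "low";
  };
  metadata: { model: string; processingTimeMs: number; strategy: string };
}

function wasAutoDetected(res: OptimizeResponse): boolean {
  return res.detection?.autoDetected ?? false;
}
```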
list_categories
Lists all 7 categories with platform counts (built-in and custom) and defaults.
list_platforms
Lists available platforms for a given category, including custom registered platforms. Shows which is the default and whether custom instructions are configured.
list_modes
Lists all 7 output modes with descriptions.
register_platform
Register a new custom AI platform for prompt optimization.
{
  "id": "my-llm",
  "category": "chat",
  "label": "My Custom LLM",
  "description": "Internal fine-tuned model",
  "syntax_hints": ["JSON mode", "max 2000 tokens"],
  "instructions": "Always use structured output format",
  "instructions_file": "my-llm.md"
}
| Parameter | Required | Description |
|---|---|---|
| id | Yes | Unique ID (lowercase, alphanumeric with hyphens) |
| category | Yes | Category this platform belongs to |
| label | Yes | Human-readable platform name |
| description | Yes | Short description |
| syntax_hints | No | Platform-specific syntax hints |
| instructions | No | Inline optimization instructions |
| instructions_file | No | Path to a .md file with detailed instructions |
update_platform
Update a custom platform or add instruction overrides to a built-in platform.
For built-in platforms (e.g. Midjourney, Claude), you can add custom instructions and extra syntax hints without modifying the originals:
{
  "id": "midjourney",
  "category": "image",
  "instructions": "Always use --v 6.1, prefer --style raw",
  "syntax_hints_append": ["--no plants", "--tile for patterns"]
}
For custom platforms, all fields can be updated.
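A sketch of how an override like the one above might merge into a built-in platform definition, assuming instructions replace and syntax hints append. These names are hypothetical and do not come from the server's source.

```typescript
// Hypothetical merge of a built-in platform definition with an override:
// `instructions` replaces, `syntax_hints_append` is appended.
interface PlatformDef {
  id: string;
  syntaxHints: string[];
  instructions?: string;
}

interface PlatformOverride {
  instructions?: string;
  syntax_hints_append?: string[];
}

function applyOverride(base: PlatformDef, o: PlatformOverride): PlatformDef {
  return {
    ...base,
    instructions: o.instructions ?? base.instructions,
    syntaxHints: [...base.syntaxHints, ...(o.syntax_hints_append ?? [])],
  };
}
```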
unregister_platform
Remove a custom platform or clear instruction overrides from a built-in platform.
{
  "id": "my-llm",
  "category": "chat"
}
For built-in platforms, use remove_override_only: true to clear your custom instructions without affecting the platform itself.
Custom Platforms & Instructions
ClarifyPrompt supports registering custom platforms and providing optimization instructions — similar to how .cursorrules or CLAUDE.md guide AI behavior.
How It Works
- Register a custom platform via register_platform
- Provide instructions inline or as a .md file
- Optimize prompts targeting your custom platform — instructions are injected into the optimization pipeline
Instruction Files
Instructions can be provided as markdown files stored at ~/.clarifyprompt/instructions/:
~/.clarifyprompt/
  config.json                  # custom platforms + overrides
  instructions/
    my-llm.md                  # instructions for custom platform
    midjourney-overrides.md    # extra instructions for built-in platform
Example instruction file (my-llm.md):
# My Custom LLM Instructions
## Response Format
- Always output valid JSON
- Include a "reasoning" field before the answer
## Constraints
- Max 2000 tokens
- Temperature should be set low (0.1-0.3) for factual queries
## Style
- Be concise and technical
- Avoid filler phrases
Override Built-in Platforms
You can add custom instructions to any of the 58 built-in platforms using update_platform. This lets you customize how prompts are optimized for platforms like Midjourney, Claude, or Sora without modifying the defaults.
Config Directory
The config directory defaults to ~/.clarifyprompt/ and can be changed via the CLARIFYPROMPT_CONFIG_DIR environment variable. Custom platforms and overrides persist across server restarts.
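The resolution order described here can be sketched as follows. This is illustrative only, not the server's actual ConfigStore code; the function names are hypothetical.

```typescript
// Illustrative: resolve the config directory (env override, else default)
// and load an instruction .md file from it, per the documented layout.
import { existsSync, readFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

function configDir(): string {
  // CLARIFYPROMPT_CONFIG_DIR overrides the default ~/.clarifyprompt/
  return process.env.CLARIFYPROMPT_CONFIG_DIR ?? join(homedir(), ".clarifyprompt");
}

function loadInstructions(file: string): string | undefined {
  const path = join(configDir(), "instructions", file);
  return existsSync(path) ? readFileSync(path, "utf8") : undefined;
}
```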
LLM Configuration
ClarifyPrompt uses an LLM to optimize prompts. It works with any OpenAI-compatible API and with the Anthropic API directly.
Environment Variables
| Variable | Required | Description |
|---|---|---|
| LLM_API_URL | Yes | API endpoint URL |
| LLM_API_KEY | Depends | API key (not needed for local Ollama) |
| LLM_MODEL | Yes | Model name/ID |
Provider Examples
Ollama (local, free):
LLM_API_URL=http://localhost:11434/v1
LLM_MODEL=qwen2.5:7b
OpenAI:
LLM_API_URL=https://api.openai.com/v1
LLM_API_KEY=sk-...
LLM_MODEL=gpt-4o
Anthropic Claude:
LLM_API_URL=https://api.anthropic.com/v1
LLM_API_KEY=sk-ant-...
LLM_MODEL=claude-sonnet-4-20250514
Google Gemini:
LLM_API_URL=https://generativelanguage.googleapis.com/v1beta/openai
LLM_API_KEY=your-gemini-key
LLM_MODEL=gemini-2.0-flash
Groq:
LLM_API_URL=https://api.groq.com/openai/v1
LLM_API_KEY=gsk_...
LLM_MODEL=llama-3.3-70b-versatile
DeepSeek:
LLM_API_URL=https://api.deepseek.com/v1
LLM_API_KEY=your-deepseek-key
LLM_MODEL=deepseek-chat
OpenRouter (any model):
LLM_API_URL=https://openrouter.ai/api/v1
LLM_API_KEY=your-openrouter-key
LLM_MODEL=anthropic/claude-sonnet-4
See .env.example for the full list of 20+ supported providers including Together AI, Fireworks, Mistral, xAI, Cohere, Perplexity, LM Studio, vLLM, LocalAI, Jan, GPT4All, and more.
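Since every provider above except Anthropic's native API speaks the OpenAI-compatible format, a multi-provider client could plausibly pick the wire format from LLM_API_URL alone. This is a hypothetical sketch; the server's actual detection logic may differ.

```typescript
// Hypothetical: choose the request format from the endpoint URL.
// Only Anthropic's native API in the provider table is non-OpenAI-compatible.
type WireFormat = "anthropic" | "openai";

function detectWireFormat(apiUrl: string): WireFormat {
  return apiUrl.includes("api.anthropic.com") ? "anthropic" : "openai";
}
```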
Web Search (Optional)
Enable context enrichment by setting enrich_context: true in your optimize_prompt call. ClarifyPrompt will search the web for relevant context before optimizing.
Supported search providers:
| Provider | Variable | URL |
|---|---|---|
| Tavily (default) | SEARCH_API_KEY | tavily.com |
| Brave Search | SEARCH_API_KEY | brave.com/search/api |
| Serper | SEARCH_API_KEY | serper.dev |
| SerpAPI | SEARCH_API_KEY | serpapi.com |
| Exa | SEARCH_API_KEY | exa.ai |
| SearXNG (self-hosted) | — | github.com/searxng/searxng |
SEARCH_PROVIDER=tavily
SEARCH_API_KEY=your-key
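A minimal validity check mirroring the table above, assuming hosted providers require SEARCH_API_KEY while self-hosted SearXNG does not. The helper name is hypothetical.

```typescript
// Illustrative: hosted search providers need a key; SearXNG does not.
const keylessProviders = new Set(["searxng"]);

function searchConfigOk(provider: string, apiKey?: string): boolean {
  return keylessProviders.has(provider) || Boolean(apiKey);
}
```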
Before and After
Image (Midjourney)
Before: "a cat sitting on a windowsill"
After: "a tabby cat sitting on a sunlit windowsill, warm golden hour
lighting, shallow depth of field, dust particles in light beams,
cozy interior background, shot on 35mm film, warm amber color
palette --ar 16:9 --v 6.1 --style raw --q 2"
Video (Sora)
Before: "a timelapse of a city"
After: "Cinematic timelapse of a sprawling metropolitan skyline
transitioning from golden hour to blue hour to full night.
Camera slowly dollies forward from an elevated vantage point.
Light trails from traffic appear as the city illuminates.
Clouds move rapidly overhead. Duration: 10s.
Style: documentary cinematography, 4K."
Code (Claude)
Before: "write a function to validate emails"
After: "Write a TypeScript function `validateEmail(input: string): boolean`
that validates email addresses against RFC 5322. Handle edge cases:
quoted local parts, IP address domains, internationalized domain
names. Return boolean, no exceptions. Include JSDoc with examples
of valid and invalid inputs. No external dependencies."
Music (Suno)
Before: "compose a chill lo-fi beat for studying"
After: "Compose an instrumental chill lo-fi beat for studying.
[Tempo: medium] [Genre: lo-fi] [Length: 2 minutes]"
Architecture
clarifyprompt-mcp/
  src/
    index.ts                MCP server entry point (7 tools, 1 resource)
    engine/
      config/
        categories.ts       7 categories, 58 platforms, 7 modes
        persistence.ts      ConfigStore — JSON config + .md file loading
        registry.ts         PlatformRegistry — merges built-in + custom platforms
      llm/client.ts         Multi-provider LLM client (OpenAI + Anthropic)
      search/client.ts      Web search (6 providers)
      optimization/
        engine.ts           Core orchestrator + auto-detection
        types.ts            TypeScript interfaces
        strategies/
          base.ts           Abstract base strategy
          chat.ts           9 platforms
          image.ts          10 platforms
          video.ts          11 platforms
          voice.ts          7 platforms
          music.ts          4 platforms
          code.ts           9 platforms
          document.ts       8 platforms
Docker
docker build -t clarifyprompt-mcp .
docker run -e LLM_API_URL=http://host.docker.internal:11434/v1 -e LLM_MODEL=qwen2.5:7b clarifyprompt-mcp
Development
git clone https://github.com/LumabyteCo/clarifyprompt-mcp.git
cd clarifyprompt-mcp
npm install
npm run build
Test with MCP Inspector:
npx @modelcontextprotocol/inspector node dist/index.js
Set environment variables in the Inspector's "Environment Variables" section before connecting.
License