# openai-gpt-image-mcp
A Model Context Protocol (MCP) tool server for OpenAI's GPT-4o/gpt-image-1 image generation and editing APIs.
- Generate images from text prompts using OpenAI's latest models.
- Edit images (inpainting, outpainting, compositing) with advanced prompt control.
- Supports: Claude Desktop, Cursor, VSCode, Windsurf, and any MCP-compatible client.
## ✨ Features

- `create-image`: Generate images from a prompt, with advanced options (size, quality, background, etc.).
- `edit-image`: Edit or extend images using a prompt and optional mask, supporting both file paths and base64 input.
- File output: Save generated images directly to disk, or receive them as base64.
## 🚀 Installation

```sh
git clone https://github.com/SureScaleAI/openai-gpt-image-mcp.git
cd openai-gpt-image-mcp
yarn install
yarn build
```
## 🔑 Configuration

Add to your Claude Desktop or VSCode (including Cursor/Windsurf) config:

```json
{
  "mcpServers": {
    "openai-gpt-image-mcp": {
      "command": "node",
      "args": ["/absolute/path/to/dist/index.js"],
      "env": { "OPENAI_API_KEY": "sk-..." }
    }
  }
}
```
Azure OpenAI deployments are also supported:

```json
{
  "mcpServers": {
    "openai-gpt-image-mcp": {
      "command": "node",
      "args": ["/absolute/path/to/dist/index.js"],
      "env": {
        "AZURE_OPENAI_API_KEY": "sk-...",
        "AZURE_OPENAI_ENDPOINT": "my.endpoint.com",
        "OPENAI_API_VERSION": "2024-12-01-preview"
      }
    }
  }
}
```
You can also supply an environment file instead of inline `env` values:

```json
{
  "mcpServers": {
    "openai-gpt-image-mcp": {
      "command": "node",
      "args": ["/absolute/path/to/dist/index.js", "--env-file", "./deployment/.env"]
    }
  }
}
```
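The referenced `.env` file is assumed to use standard dotenv-style `KEY=VALUE` lines, with the same variable names as the inline `env` blocks above:

```ini
# ./deployment/.env
OPENAI_API_KEY=sk-...

# Or, for Azure deployments:
# AZURE_OPENAI_API_KEY=sk-...
# AZURE_OPENAI_ENDPOINT=my.endpoint.com
# OPENAI_API_VERSION=2024-12-01-preview
```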
## ⚡ Advanced

- For `create-image`, set `n` to generate up to 10 images at once.
- For `edit-image`, provide a mask image (file path or base64) to control where edits are applied.
- Provide an environment file with `--env-file path/to/file/.env`.
- See `src/index.ts` for all options.
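As a sketch, a `create-image` tool call from an MCP client might pass arguments like the following. Parameter names follow the options mentioned above (`n`, `size`, `quality`, `background`, `file_output`); the exact accepted values are defined in `src/index.ts`, and the values shown here are illustrative:

```json
{
  "prompt": "A watercolor fox in a snowy forest",
  "n": 2,
  "size": "1024x1024",
  "quality": "high",
  "background": "transparent",
  "file_output": "/tmp/fox.png"
}
```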
## 🧑‍💻 Development

- TypeScript source: `src/index.ts`
- Build: `yarn build`
- Run: `node dist/index.js`
## 📝 License
MIT
## 🩺 Troubleshooting

- Make sure your `OPENAI_API_KEY` is valid and has image API access.
- You must have a verified OpenAI organization. After verifying, it can take 15–20 minutes for image API access to activate.
- File paths must be absolute:
  - Unix/macOS/Linux: starting with `/` (e.g., `/path/to/image.png`)
  - Windows: drive letter followed by `:` (e.g., `C:/path/to/image.png` or `C:\path\to\image.png`)
- For file output, ensure the directory is writable.
- If you see errors about file types, check your image file extensions and formats.
## ⚠️ Limitations & Large File Handling

- **1MB payload limit:** MCP clients (including Claude Desktop) have a hard 1MB limit for tool responses. Large images (especially high-res or multiple images) can easily exceed this limit if returned as base64.
- **Auto-switch to file output:** If the total image size exceeds 1MB, the tool automatically saves images to disk and returns the file path(s) instead of base64. This ensures compatibility and prevents errors like `result exceeds maximum length of 1048576`.
- **Default file location:** If you do not specify a `file_output` path, images are saved to `/tmp` (or the directory set by the `MCP_HF_WORK_DIR` environment variable) with a unique filename.
- **Environment variable:** Set `MCP_HF_WORK_DIR` to control where large images and file outputs are saved. Example: `export MCP_HF_WORK_DIR=/your/desired/dir`
- **Best practice:** For large or production images, always use file output and ensure your client is configured to handle file paths.
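For clients launched from a GUI (where an `export` in your shell won't apply), `MCP_HF_WORK_DIR` can be set alongside the API key in the client config. This is a sketch based on the configuration format shown above:

```json
{
  "mcpServers": {
    "openai-gpt-image-mcp": {
      "command": "node",
      "args": ["/absolute/path/to/dist/index.js"],
      "env": {
        "OPENAI_API_KEY": "sk-...",
        "MCP_HF_WORK_DIR": "/your/desired/dir"
      }
    }
  }
}
```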
## 🙏 Credits

- Built with `@modelcontextprotocol/sdk`
- Uses the `openai` Node.js SDK
- Built by SureScale.ai
- Contributions from Axle Research and Technology