# graphql-to-mcp

Turn any GraphQL API into MCP tools — zero config, zero code.
Point graphql-to-mcp at a GraphQL endpoint and it auto-generates one MCP tool per query/mutation via introspection. Works with Claude Desktop, Cursor, Windsurf, and any MCP client.
## Quick Start

Try it now — no install needed:

```shell
npx graphql-to-mcp https://countries.trevorblades.com/graphql
```

Or add it to your Claude Desktop / Cursor config:

```json
{
  "mcpServers": {
    "countries": {
      "command": "npx",
      "args": ["-y", "graphql-to-mcp", "https://countries.trevorblades.com/graphql"]
    }
  }
}
```
That's it. Claude can now query countries, continents, and languages.
## Features

- Zero config — just provide a GraphQL endpoint URL
- Auto-introspection — discovers all queries and mutations automatically
- Flat parameter schemas — nested `InputObject` types are flattened for better LLM accuracy
- Smart truncation — large responses are intelligently pruned (array slicing + depth limiting)
- Auth support — Bearer tokens, API keys (header or query)
- Retry logic — automatic retries on 429/5xx with exponential backoff
- Include/exclude filters — expose only the operations you want
- Schema caching — skip re-introspection with `--schema-cache` for faster startup
- Mutation safety — auto-detect destructive mutations (`delete*`, `remove*`, etc.) and warn or block them
## Usage

### CLI

```shell
# Public API (no auth)
npx graphql-to-mcp https://countries.trevorblades.com/graphql

# With bearer token
npx graphql-to-mcp https://api.github.com/graphql --bearer ghp_xxxxx

# With API key
npx graphql-to-mcp https://api.example.com/graphql --api-key "X-API-Key:your-key:header"

# Filter operations
npx graphql-to-mcp https://api.example.com/graphql --include "get*" --exclude "internal*"

# With prefix (avoid name collisions when using multiple APIs)
npx graphql-to-mcp https://api.example.com/graphql --prefix myapi

# Cache schema locally for faster restarts
npx graphql-to-mcp https://api.example.com/graphql --schema-cache ./schema.json

# Force re-introspection (ignore cache)
npx graphql-to-mcp https://api.example.com/graphql --schema-cache ./schema.json --force-refresh

# Block destructive mutations (delete*, remove*, etc.)
npx graphql-to-mcp https://api.example.com/graphql --mutation-safety safe
```
### Claude Desktop / Cursor Config

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": [
        "-y", "graphql-to-mcp",
        "https://api.github.com/graphql",
        "--bearer", "ghp_xxxxx",
        "--prefix", "github"
      ]
    }
  }
}
```
### Programmatic

```typescript
import { createServer } from "graphql-to-mcp";

const server = await createServer({
  endpoint: "https://api.example.com/graphql",
  auth: { type: "bearer", token: "xxx" },
  include: ["getUser", "listUsers"],
});
```
## How It Works

- Introspect — fetches the GraphQL schema via an introspection query
- Flatten — nested `InputObject` types are flattened into simple key-value parameters (e.g., `input.name` → `input_name`)
- Generate — each query/mutation becomes an MCP tool with a flat JSON Schema
- Execute — when an LLM calls a tool, the flat args are reconstructed into proper GraphQL variables and sent to your endpoint
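The flatten/reconstruct round trip can be sketched roughly as follows. This is an illustration, not the library's actual code: the function names (`flattenArgs`, `unflattenArgs`) and the `_` separator are assumptions based on the `input.name` → `input_name` example above.

```typescript
type Json = string | number | boolean | null | Json[] | { [k: string]: Json };

// Flatten nested input objects into flat keys, e.g.
// { input: { name: "Ada" } } → { input_name: "Ada" }.
// Note: a real implementation would need to escape field names
// that themselves contain "_"; this sketch ignores that case.
function flattenArgs(obj: { [k: string]: Json }, prefix = ""): { [k: string]: Json } {
  const out: { [k: string]: Json } = {};
  for (const [key, value] of Object.entries(obj)) {
    const path = prefix ? `${prefix}_${key}` : key;
    if (value !== null && typeof value === "object" && !Array.isArray(value)) {
      Object.assign(out, flattenArgs(value as { [k: string]: Json }, path));
    } else {
      out[path] = value;
    }
  }
  return out;
}

// Rebuild proper GraphQL variables from the flat tool arguments
function unflattenArgs(flat: { [k: string]: Json }): { [k: string]: Json } {
  const out: { [k: string]: Json } = {};
  for (const [path, value] of Object.entries(flat)) {
    const parts = path.split("_");
    let node = out;
    for (const part of parts.slice(0, -1)) {
      node = (node[part] ??= {}) as { [k: string]: Json };
    }
    node[parts[parts.length - 1]] = value;
  }
  return out;
}
```

So a tool call with `{ input_name: "Ada", input_role: "admin" }` becomes the GraphQL variables `{ input: { name: "Ada", role: "admin" } }` before the request is sent.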
## Why Flat Schemas?

LLMs are significantly better at filling flat key-value parameters than deeply nested JSON objects. By flattening `InputObject` types, we get:

- Higher accuracy in parameter filling
- Fewer hallucinated nested structures
- Better compatibility across different LLM providers
## Options

| Option | Description | Default |
|---|---|---|
| `--bearer <token>` | Bearer token auth | — |
| `--api-key <name:value:in>` | API key auth | — |
| `-H, --header <name:value>` | Custom header (repeatable) | — |
| `--include <pattern>` | Include only matching operations | all |
| `--exclude <pattern>` | Exclude matching operations | none |
| `--prefix <name>` | Tool name prefix | — |
| `--timeout <ms>` | Request timeout | 30000 |
| `--max-retries <n>` | Retry on 429/5xx | 3 |
| `--transport <stdio\|sse>` | MCP transport | stdio |
| `--schema-cache <path>` | Save/load introspection cache | — |
| `--force-refresh` | Ignore cache, re-introspect | false |
| `--mutation-safety <mode>` | `warn` \| `safe` \| `unrestricted` | `warn` |
## Smart Truncation
GraphQL APIs can return large payloads that overwhelm LLM context windows. graphql-to-mcp automatically:
- Slices arrays to 20 items (with metadata showing total count)
- Prunes depth beyond 5 levels (with object/array summaries)
- Hard truncates at 50K characters as a safety net
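The three-stage pruning above can be sketched like this. The limits (20 items, depth 5, 50K characters) come from this README, but the function names and the exact summary strings are assumptions, not the library's real output format.

```typescript
const MAX_ITEMS = 20;
const MAX_DEPTH = 5;
const MAX_CHARS = 50_000;

// Recursively slice arrays and prune deep structures,
// replacing pruned branches with short summaries.
function truncate(value: unknown, depth = 0): unknown {
  if (Array.isArray(value)) {
    if (depth >= MAX_DEPTH) return `[array of ${value.length} items]`;
    const slice = value.slice(0, MAX_ITEMS).map((v) => truncate(v, depth + 1));
    return value.length > MAX_ITEMS
      ? [...slice, `…truncated, ${value.length} items total`]
      : slice;
  }
  if (value !== null && typeof value === "object") {
    const obj = value as Record<string, unknown>;
    if (depth >= MAX_DEPTH) return `{object with ${Object.keys(obj).length} keys}`;
    return Object.fromEntries(
      Object.entries(obj).map(([k, v]) => [k, truncate(v, depth + 1)])
    );
  }
  return value; // scalars pass through unchanged
}

// Hard character cap as the final safety net
function truncateResponse(data: unknown): string {
  const text = JSON.stringify(truncate(data));
  return text.length > MAX_CHARS ? text.slice(0, MAX_CHARS) + "…" : text;
}
```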
## Schema Caching

Introspection queries can be slow on large schemas. Use `--schema-cache` to save the introspection result locally:

```shell
# First run: introspects and saves to cache
npx graphql-to-mcp https://api.example.com/graphql --schema-cache ./schema.json

# Subsequent runs: loads from cache (instant startup)
npx graphql-to-mcp https://api.example.com/graphql --schema-cache ./schema.json

# Force re-introspection when the API schema changes
npx graphql-to-mcp https://api.example.com/graphql --schema-cache ./schema.json --force-refresh
```
The cache file stores the endpoint URL and timestamp. If you point at a different endpoint, it automatically re-introspects.
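The cache-validity check amounts to a couple of comparisons. A minimal sketch, assuming a cache-file shape (`CacheFile`) and helper name that are illustrative, not the library's actual internals:

```typescript
// Assumed shape of the --schema-cache file: endpoint + timestamp + raw schema
interface CacheFile {
  endpoint: string;
  introspectedAt: string; // ISO timestamp
  schema: unknown;        // raw introspection result
}

// Returns the cached schema, or null when re-introspection is needed
function loadCachedSchema(
  cache: CacheFile | null,
  endpoint: string,
  forceRefresh: boolean
): unknown | null {
  if (forceRefresh || cache === null) return null; // --force-refresh ignores the cache
  if (cache.endpoint !== endpoint) return null;    // different endpoint → re-introspect
  return cache.schema;
}
```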
## Mutation Safety

By default, graphql-to-mcp detects destructive mutations and adds warnings to their descriptions. This helps LLMs understand the risk before executing them.

Detected patterns: `delete*`, `remove*`, `drop*`, `clear*`, `truncate*`, `destroy*`, `purge*`, `reset*` (case-insensitive).
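The detection itself is a case-insensitive prefix match. A sketch of the check and the three modes; the pattern list comes from this README, while the function names and the exact "DESTRUCTIVE:" handling are assumptions:

```typescript
const DESTRUCTIVE_PREFIXES = [
  "delete", "remove", "drop", "clear", "truncate", "destroy", "purge", "reset",
];

// Case-insensitive prefix match: "deleteUser" and "DropTable" are both flagged
function isDestructive(mutationName: string): boolean {
  const lower = mutationName.toLowerCase();
  return DESTRUCTIVE_PREFIXES.some((p) => lower.startsWith(p));
}

// warn: prefix the tool description; safe: exclude the tool entirely (null);
// unrestricted: pass everything through untouched
function applySafety(
  name: string,
  description: string,
  mode: "warn" | "safe" | "unrestricted"
): string | null {
  if (!isDestructive(name) || mode === "unrestricted") return description;
  if (mode === "safe") return null;
  return `DESTRUCTIVE: ${description}`;
}
```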
| Mode | Behavior |
|---|---|
| `warn` (default) | Adds "DESTRUCTIVE:" prefix to dangerous mutation descriptions |
| `safe` | Completely excludes dangerous mutations from the tool list |
| `unrestricted` | No filtering or warnings (previous behavior) |

```shell
# Safe mode: only expose read queries + non-destructive mutations
npx graphql-to-mcp https://api.example.com/graphql --mutation-safety safe

# Unrestricted: expose everything (use with caution)
npx graphql-to-mcp https://api.example.com/graphql --mutation-safety unrestricted
```
## Use with REST APIs Too

Pair with mcp-openapi to give Claude access to both REST and GraphQL APIs:

```json
{
  "mcpServers": {
    "github-graphql": {
      "command": "npx",
      "args": ["-y", "graphql-to-mcp", "https://api.github.com/graphql", "--bearer", "ghp_xxx", "--prefix", "gh"]
    },
    "petstore-rest": {
      "command": "npx",
      "args": ["-y", "mcp-openapi", "https://petstore3.swagger.io/api/v3/openapi.json"]
    }
  }
}
```
## Related

- mcp-openapi — Same zero-config approach for REST/OpenAPI APIs

## License

MIT