MCPizer

MCPizer lets your AI assistant (Claude, VS Code, etc.) call any REST API or gRPC service by automatically converting their schemas into MCP (Model Context Protocol) tools.
Key Features:
- 🚀 GitHub integration - fetch schemas directly with github:// URLs
- 📄 .proto file support - use gRPC without reflection enabled
- 🔐 Private repo support - automatic authentication via the gh CLI
- 🌐 Connect-RPC support - HTTP/JSON and gRPC modes
- 🔧 Auto-discovery - finds OpenAPI/Swagger endpoints automatically
What is MCPizer?
MCPizer is a server that:
- Auto-discovers API schemas from your services (OpenAPI/Swagger, gRPC reflection, .proto files)
- Converts them into tools your AI can use
- Handles all the API calls with proper types and error handling
Works with any framework that exposes OpenAPI schemas (FastAPI, Spring Boot, Express, etc.) or gRPC services (with reflection or .proto files). No code changes needed in your APIs - just point MCPizer at them!
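For example, a minimal config pointing MCPizer at one REST service and one gRPC service (hostnames are illustrative):

```yaml
schema_sources:
  - http://my-fastapi-app:8000      # OpenAPI auto-discovery
  - grpc://my-grpc-service:50051    # gRPC reflection
```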
How it Works
sequenceDiagram
participant AI as AI Assistant<br/>(Claude/VS Code)
participant MCP as MCPizer
participant API as Your APIs<br/>(REST/gRPC)
Note over AI,API: Initial Setup
MCP->>API: Auto-discover schemas
API-->>MCP: OpenAPI/gRPC reflection
MCP->>MCP: Convert to MCP tools
Note over AI,API: Runtime Usage
AI->>MCP: List available tools
MCP-->>AI: Tools from all APIs
AI->>MCP: Call tool "create_user"
MCP->>API: POST /users
API-->>MCP: {"id": 123, "name": "Alice"}
MCP-->>AI: Tool result
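The runtime exchange in the diagram corresponds to MCP JSON-RPC messages. A minimal sketch of what a client sends over the STDIO transport (method names follow the MCP spec; the create_user tool is illustrative):

```python
import json

# tools/list: ask the server which tools are available.
list_req = {"jsonrpc": "2.0", "method": "tools/list", "id": 1}

# tools/call: invoke one tool; MCPizer translates this into POST /users.
call_req = {
    "jsonrpc": "2.0",
    "method": "tools/call",
    "id": 2,
    "params": {
        "name": "create_user",
        "arguments": {"name": "Alice"},
    },
}

# Over STDIO, each message is written as one line of JSON.
print(json.dumps(list_req))
print(json.dumps(call_req))
```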
Architecture Overview
graph TB
subgraph "AI Assistants"
Claude[Claude Desktop]
VSCode[VS Code Extensions]
Other[Other MCP Clients]
end
subgraph "MCPizer"
Transport{Transport Layer}
Discovery[Schema Discovery]
Converter[Tool Converter]
Invoker[API Invoker]
Transport -->|STDIO/SSE| Discovery
Discovery --> Converter
Converter --> Invoker
end
subgraph "Your APIs"
FastAPI[FastAPI<br/>Auto-discovery]
Spring[Spring Boot<br/>Auto-discovery]
gRPC[gRPC Services<br/>Reflection/.proto]
Custom[Custom APIs<br/>Direct schema URL]
end
Claude --> Transport
VSCode --> Transport
Other --> Transport
Invoker --> FastAPI
Invoker --> Spring
Invoker --> gRPC
Invoker --> Custom
style MCPizer fill:#e1f5e1
style Transport fill:#fff2cc
style Discovery fill:#fff2cc
style Converter fill:#fff2cc
style Invoker fill:#fff2cc
Installation
# Install MCPizer
go install github.com/i2y/mcpizer/cmd/mcpizer@latest
# Verify installation
mcpizer --help
Usage Examples
# Use default config file (configs/mcpizer.yaml)
mcpizer
# Specify config file via command line (highest priority)
mcpizer -config=/path/to/config.yaml
# Use GitHub-hosted config
mcpizer -config=github://myorg/configs/mcpizer-prod.yaml
# Or via environment variable
export MCPIZER_CONFIG_FILE=/path/to/config.yaml
mcpizer
# STDIO mode with custom config
mcpizer -transport=stdio -config=./my-config.yaml
Note: Make sure $GOPATH/bin is in your PATH. If Go is not installed, install Go first.
Quick Start
Step 1: Configure Your APIs
Create a config file with your API endpoints:
schema_sources:
# Production APIs with HTTPS
- https://api.mycompany.com # Auto-discovers OpenAPI
- https://api.example.com/openapi.json # Direct schema URL
# GitHub-hosted schemas (NEW: use github:// URLs)
- github://myorg/api-specs/main/user-api.yaml # Uses gh CLI auth
- github://OAI/OpenAPI-Specification/examples/v3.0/petstore.yaml@master
- https://raw.githubusercontent.com/myorg/api-specs/main/user-api.yaml # Direct URL also works
# Internal services (FastAPI, Spring Boot, etc.)
- http://my-fastapi-app:8000 # Auto-discovers at /openapi.json, /docs
- http://spring-service:8080 # Auto-discovers at /v3/api-docs
# gRPC services (must have reflection enabled)
- grpc://my-grpc-service:50051
# gRPC with .proto files (NEW! - no reflection needed)
- url: https://raw.githubusercontent.com/myorg/protos/main/service.proto
server: grpc://production.example.com:50051
# Or use github:// for private repos (uses gh CLI)
- url: github://myorg/protos/service.proto@main
server: grpc://production.example.com:50051
# Connect-RPC services (NEW!)
# If the service supports gRPC reflection:
- grpc://connect.example.com:50051
# Connect-RPC with HTTP/JSON mode:
- url: github://connectrpc/examples/eliza/eliza.proto
server: https://demo.connectrpc.com
type: connect
mode: http # Use HTTP/JSON for easier debugging
# Local development
- http://localhost:3000
- grpc://localhost:50052
# Public test APIs
- https://petstore3.swagger.io/api/v3/openapi.json
- grpc://grpcb.in:9000
Step 2: Choose Your Transport Mode
MCPizer supports two transport modes:
📝 STDIO Mode (for clients that manage process lifecycle)
Used by clients that start MCPizer as a subprocess and communicate via standard input/output.
Example: Claude Desktop
Add to your configuration file:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
- Linux: ~/.config/Claude/claude_desktop_config.json
{
"mcpServers": {
"mcpizer": {
"command": "mcpizer",
"args": ["-transport=stdio", "-config=/path/to/your/config.yaml"]
}
}
}
The client will start MCPizer automatically when needed.
🌐 SSE Mode (Server-Sent Events over HTTP)
Used by clients that connect to a running MCPizer server via HTTP.
# Start MCPizer server (if your client doesn't start it automatically)
mcpizer
# Server runs at http://localhost:8080/sse
Configure your MCP client to connect to http://localhost:8080/sse
Note: Some clients may start the server automatically, while others require manual startup.
🧪 For Testing/Development
# Quick test - list available tools
mcpizer -transport=stdio << 'EOF'
{"jsonrpc":"2.0","method":"tools/list","id":1}
EOF
# Interactive mode
mcpizer -transport=stdio
Usage Guide
When to Use What
| I want to... | Do this... |
|---|---|
| Use my API with Claude Desktop | Add config to claude_desktop_config.json (see Quick Start) |
| Test if my API works with MCP | Run mcpizer -transport=stdio and check tool list |
| Run as a background service | Use SSE mode with mcpizer (no args) |
| Debug connection issues | Set MCPIZER_LOG_LEVEL=debug |
| Use a private GitHub repo | Use github:// URLs (requires gh CLI) |
| Use gRPC without reflection | Use .proto files with server field |
| Multiple environments, same API | Use same schema file, different server values |
Configuration
MCPizer looks for config in this order:
1. -config command line flag (highest priority)
2. MCPIZER_CONFIG_FILE environment variable
3. configs/mcpizer.yaml (default)
Supported API Types
REST APIs (OpenAPI/Swagger)
schema_sources:
# Auto-discovery from base URL
- https://api.production.com # Tries /openapi.json, /swagger.json, etc.
- http://internal-api:8000 # For internal services
# Direct schema URLs
- https://api.example.com/v3/openapi.yaml
- https://raw.githubusercontent.com/company/api-specs/main/openapi.json
Connect-RPC Services (NEW!)
schema_sources:
# Connect-RPC with gRPC reflection (if supported)
- grpc://connect.example.com:50051
# Connect-RPC with HTTP/JSON mode
- url: github://connectrpc/examples/eliza/eliza.proto
server: https://demo.connectrpc.com
type: connect
mode: http # HTTP/JSON mode (default)
# Connect-RPC with gRPC mode
- url: https://raw.githubusercontent.com/myorg/protos/service.proto
server: grpc://connect.example.com:50051
type: connect
mode: grpc # Use gRPC transport
Connect-RPC features:
- HTTP/JSON mode: Human-readable, works with curl and browser tools
- gRPC mode: Binary protocol, more efficient
- Dual support: Same service can be accessed via both modes
- No proxy needed: Direct HTTP/JSON communication
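In HTTP/JSON mode a Connect call is a plain POST to /package.Service/Method with a JSON body, so you can reproduce what MCPizer sends with standard tooling. A minimal sketch (the Eliza service and Say method follow Connect's routing convention; verify the exact names against the proto):

```python
import json

def connect_request(server: str, service: str, method: str, payload: dict):
    """Build the URL, headers, and body for a Connect HTTP/JSON call."""
    url = f"{server}/{service}/{method}"
    headers = {"Content-Type": "application/json"}
    body = json.dumps(payload)
    return url, headers, body

url, headers, body = connect_request(
    "https://demo.connectrpc.com",
    "connectrpc.eliza.v1.ElizaService",
    "Say",
    {"sentence": "Hello"},
)
print(url)  # https://demo.connectrpc.com/connectrpc.eliza.v1.ElizaService/Say
```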
Separate Schema Files and API Servers
MCPizer supports OpenAPI schema files that are hosted separately from the actual API server. This is useful when:
- The API doesn't expose its own schema - You can write an OpenAPI spec for any API
- Schema is managed separately - Documentation team maintains schemas independently
- Multiple environments - One schema file for dev/staging/production APIs
How it works:
schema_sources:
# Schema file points to production API
- https://docs.company.com/api/v1/openapi.yaml
# Local schema file for external API
- ./schemas/third-party-api.yaml
The OpenAPI spec contains server URLs:
servers:
- url: https://api.production.com
description: Production server
- url: https://api.staging.com
description: Staging server
MCPizer will:
1. Fetch the schema from the schema_sources URL
2. Read the servers section from the OpenAPI spec
3. Use the first available server URL for actual API calls
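That selection step amounts to something like the following (a sketch; MCPizer's real implementation may differ):

```python
def pick_base_url(spec: dict) -> str:
    """Return the first server URL declared in an OpenAPI document."""
    servers = spec.get("servers", [])
    if not servers:
        raise ValueError("OpenAPI spec declares no servers")
    return servers[0]["url"]

spec = {
    "servers": [
        {"url": "https://api.production.com", "description": "Production server"},
        {"url": "https://api.staging.com", "description": "Staging server"},
    ]
}
print(pick_base_url(spec))  # https://api.production.com
```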
Example: Creating OpenAPI spec for an API without documentation
If you have an API at https://internal-api.company.com that doesn't provide OpenAPI:
- Write your own OpenAPI spec:
openapi: 3.0.0
info:
title: Internal API
version: 1.0.0
servers:
- url: https://internal-api.company.com
paths:
/users:
get:
summary: List users
responses:
'200':
description: Success
content:
application/json:
schema:
type: array
items:
type: object
properties:
id: {type: integer}
name: {type: string}
- Host it anywhere:
  - GitHub: https://raw.githubusercontent.com/yourorg/specs/main/api.yaml
  - S3/CDN: https://cdn.company.com/api-specs/v1/openapi.json
  - Local file: ./schemas/third-party-api.yaml
- Point MCPizer to your schema file
Auto-Discovery Process
graph TD
Start["Base URL provided:<br/>http://your-api:8000"]
Try1["/openapi.json<br/>FastAPI default"]
Try2["/docs/openapi.json<br/>FastAPI alt"]
Try3["/swagger.json<br/>Swagger 2.0"]
Try4["/v3/api-docs<br/>Spring Boot"]
Try5["...more paths..."]
Found["✓ Schema found!<br/>Parse and convert"]
NotFound["✗ Not found<br/>Try direct URL"]
Start --> Try1
Try1 -->|404| Try2
Try2 -->|404| Try3
Try3 -->|404| Try4
Try4 -->|404| Try5
Try1 -->|200| Found
Try2 -->|200| Found
Try3 -->|200| Found
Try4 -->|200| Found
Try5 -->|All fail| NotFound
style Start fill:#e3f2fd
style Found fill:#c8e6c9
style NotFound fill:#ffcdd2
Supported frameworks:
- FastAPI: /openapi.json, /docs/openapi.json
- Spring Boot: /v3/api-docs, /swagger-ui/swagger.json
- Express/NestJS: /api-docs, /swagger.json
- Rails: /api/v1/swagger.json, /apidocs
- See full list
gRPC Services
schema_sources:
# Using gRPC reflection (requires reflection enabled on server)
- grpc://your-grpc-host:50051 # Your service
- grpc://grpcb.in:9000 # Public test service
# Using .proto files (NEW! - no reflection needed)
- url: https://raw.githubusercontent.com/grpc/grpc-go/master/examples/helloworld/helloworld/helloworld.proto
server: grpc://production.example.com:50051
# Private GitHub .proto files (uses gh CLI authentication)
- url: github://myorg/protos/user-service.proto
server: grpc://user-service:50051
# With specific branch/tag
- url: github://grpc/grpc-go/examples/helloworld/helloworld/[email protected]
server: grpc://production.example.com:50051
Option 1: gRPC Reflection (requires reflection enabled):
// In your gRPC server
import "google.golang.org/grpc/reflection"
reflection.Register(grpcServer)
Option 2: .proto Files (NEW! - more secure, no reflection needed):
- Host your .proto files anywhere (GitHub, S3, CDN, etc.)
- GitHub URLs (github://) automatically use gh CLI authentication
- Specify the server endpoint separately
- Perfect for production where reflection is disabled
- Allows schema versioning and CI/CD validation
For alternative reflection implementations, see:
- connectrpc/grpcreflect-go Connect-Go's reflection implementation
Local Files
schema_sources:
- ./api-spec.json
- /path/to/openapi.yaml
GitHub Integration (NEW!)
MCPizer can fetch schemas directly from GitHub repositories using the gh CLI tool - including both OpenAPI and .proto files:
schema_sources:
# OpenAPI schemas from GitHub
- github://owner/repo/path/to/openapi.yaml
- github://microsoft/api-guidelines/graph/[email protected]
# .proto files from GitHub (NEW!)
- url: github://grpc/grpc-go/examples/helloworld/helloworld/helloworld.proto@master
server: grpc://production.example.com:50051
# Private repositories (uses gh CLI authentication)
- github://myorg/private-apis/user-api.yaml
- url: github://myorg/private-protos/[email protected]
server: grpc://internal-service:50051
# Load MCPizer config itself from GitHub!
# Set MCPIZER_CONFIG_FILE=github://myorg/configs/mcpizer.yaml
Benefits:
- ✅ Works with private repositories (uses gh authentication)
- ✅ Specify branches/tags with @ref syntax
- ✅ No need to manage raw GitHub URLs or tokens
- ✅ Supports both OpenAPI and .proto files
- ✅ Config files can also be stored in GitHub
Requirements:
- Install GitHub CLI: brew install gh (macOS) or see docs
- Authenticate: gh auth login
Environment Variables
| Variable | Default | When to use |
|---|---|---|
| MCPIZER_CONFIG_FILE | ~/.mcpizer.yaml | Different config per environment; can be a github:// URL |
| MCPIZER_LOG_LEVEL | info | Set to debug for troubleshooting |
| MCPIZER_LOG_FILE | /tmp/mcpizer.log | Change log location (STDIO mode) |
| MCPIZER_LISTEN_ADDR | :8080 | Change port (SSE mode) |
| MCPIZER_HTTP_CLIENT_TIMEOUT | 30s | Slow APIs need more time |
Common Scenarios
"I want Claude to use my local FastAPI app"
# 1. Your FastAPI runs on port 8000
python -m uvicorn main:app
# 2. Install MCPizer
go install github.com/i2y/mcpizer/cmd/mcpizer@latest
# 3. Configure (~/.mcpizer.yaml)
printf 'schema_sources:\n  - http://localhost:8000\n' > ~/.mcpizer.yaml
# 4. Add to Claude Desktop config and restart
# Now ask Claude: "What endpoints are available?"
"I want to test if MCPizer sees my API"
# Quick check - what tools are available?
mcpizer -transport=stdio << 'EOF'
{"jsonrpc":"2.0","method":"tools/list","id":1}
EOF
# Should list all your API endpoints as tools
"My API needs authentication"
# For APIs that require authentication headers
schema_sources:
# Object format with headers (for fetching schemas)
- url: https://api.example.com/openapi.json
headers:
Authorization: "Bearer YOUR_API_TOKEN"
X-API-Key: "YOUR_API_KEY"
# GitHub private repos (automatic auth via gh CLI)
- github://myorg/private-apis/openapi.yaml # No headers needed!
- url: github://myorg/private-protos/api.proto # gh handles auth
server: grpc://api.example.com:50051
# Simple format (no auth required)
- https://public-api.example.com/swagger.json
Note: These headers are used when fetching the schema files. Headers required for actual API calls should be defined in the OpenAPI spec itself.
"I'm getting 'no tools available'"
# 1. Check if your API is running
curl http://localhost:8000/openapi.json # Should return JSON
# 2. Run with debug logging
MCPIZER_LOG_LEVEL=debug mcpizer -transport=stdio
# 3. Check the log file
tail -f /tmp/mcpizer.log
"I want to use my company's gRPC services"
Option 1: If reflection is enabled
# Simple - just point to the service
schema_sources:
- grpc://my-service:50051
Option 2: Using .proto files (recommended)
# More secure - no reflection needed in production
schema_sources:
# From GitHub (private repos supported)
- url: github://mycompany/protos/[email protected]
server: grpc://user-service.prod:443
# From any HTTPS URL
- url: https://cdn.mycompany.com/schemas/order-service.proto
server: grpc://order-service.prod:443
"I want to run MCPizer as a service"
Option 1: Direct binary execution
# Run in background with specific config
mcpizer -config /etc/mcpizer/production.yaml &
# Or use systemd (create /etc/systemd/system/mcpizer.service)
[Unit]
Description=MCPizer MCP Server
After=network.target
[Service]
Type=simple
ExecStart=/usr/local/bin/mcpizer
Environment="MCPIZER_CONFIG_FILE=/etc/mcpizer/production.yaml"
Restart=always
User=mcpizer
[Install]
WantedBy=multi-user.target
Troubleshooting
Debug Commands
# See what's happening
MCPIZER_LOG_LEVEL=debug mcpizer -transport=stdio
# Watch logs (STDIO mode)
tail -f /tmp/mcpizer.log
# Test your API is accessible
curl http://your-api-host:8000/openapi.json
# Test gRPC reflection
grpcurl -plaintext your-grpc-host:50051 list
Common Issues
| Problem | Solution |
|---|---|
| "No tools available" | • Check API is running • Try direct schema URL • Check debug logs |
| "Connection refused" | • Wrong port? • Check if API is running • Firewall blocking? |
| "String should have at most 64 characters" | Update MCPizer - this is fixed in latest version |
| gRPC "connection refused" | • Enable reflection in your gRPC server • Check with grpcurl • Or use the .proto file approach instead |
| "Schema not found at base URL" | • Specify exact schema path • Check if API exposes OpenAPI |
| ".proto file missing server" | • Add server: grpc://host:port to your config • Required for .proto files |
Examples
Complete Flow Example
Here's how MCPizer works with a FastAPI service:
flowchart LR
subgraph "Your FastAPI App"
API[FastAPI Service<br/>Port 8000]
Schema["/openapi.json<br/>Auto-generated"]
API --> Schema
end
subgraph "MCPizer Config"
Config["~/.mcpizer.yaml<br/>schema_sources:<br/>http://my-fastapi:8000"]
end
subgraph "MCPizer Process"
Discover["(1) Discover schema<br/>at /openapi.json"]
Convert["(2) Convert endpoints<br/>to MCP tools"]
Register["(3) Register tools<br/>with MCP protocol"]
Discover --> Convert
Convert --> Register
end
subgraph "AI Assistant"
List["List tools:<br/>• get_item<br/>• create_item<br/>• update_item"]
Call["Call: get_item<br/>{item_id: 123}"]
Result["Result:<br/>{id: 123, name: 'Test'}"]
List --> Call
Call --> Result
end
Config --> Discover
Schema --> Discover
Register --> List
Call -->|HTTP GET /items/123| API
API -->|JSON Response| Result
style API fill:#e8f4fd
style Config fill:#fff4e6
style Register fill:#e8f5e9
style Result fill:#f3e5f5
FastAPI Example
# main.py
from fastapi import FastAPI
app = FastAPI()
@app.get("/items/{item_id}")
def get_item(item_id: int, q: str = None):
return {"item_id": item_id, "q": q}
# MCPizer auto-discovers at http://localhost:8000/openapi.json
gRPC Example
Option 1: Using Reflection
// Enable reflection for MCPizer
import "google.golang.org/grpc/reflection"
func main() {
s := grpc.NewServer()
pb.RegisterYourServiceServer(s, &server{})
reflection.Register(s) // This line enables MCPizer support
s.Serve(lis)
}
Option 2: Using .proto Files (Recommended for Production)
# config.yaml
schema_sources:
# Your .proto file in version control
- url: github://myorg/protos/[email protected]
server: grpc://user-service.prod.example.com:443
# Multiple environments, same schema
- url: github://myorg/protos/[email protected]
server: grpc://user-service.staging.example.com:443
Benefits:
- ✅ No reflection needed in production
- ✅ Version-controlled schemas
- ✅ CI/CD can validate schemas
- ✅ Same .proto for multiple environments
Development
# Run tests
go test ./...
# Run integration tests (requires internet connection)
go test -tags=integration ./...
# Build locally
go build -o mcpizer ./cmd/mcpizer
# Run with example services (includes Petstore, gRPC test service, Jaeger)
docker compose up
# Run individual examples
cd examples/fastapi && pip install -r requirements.txt && python main.py
See examples/ for more complete examples:
- proto-config.yaml - Using .proto files with multiple environments
- fastapi/ - FastAPI integration example
- grpc-service/ - gRPC service with reflection
Contributing
Contributions welcome! Please:
- Check existing issues first
- Fork and create a feature branch
- Add tests for new functionality
- Submit a PR
License
MIT - see LICENSE