🤖 Cross-LLM MCP Server
Access multiple LLM APIs from one place. Call ChatGPT, Claude, DeepSeek, Gemini, Grok, Kimi, Perplexity, Mistral, and Hugging Face Inference Router with intelligent model selection, preferences, and prompt logging.
An MCP (Model Context Protocol) server that provides unified access to multiple Large Language Model APIs for AI coding environments like Cursor and Claude Desktop.
Why Use Cross-LLM MCP?
- 🌐 9 LLM Providers – ChatGPT, Claude, DeepSeek, Gemini, Grok, Kimi, Perplexity, Mistral, Hugging Face
- 🎯 Smart Model Selection – Tag-based preferences (coding, business, reasoning, math, creative, general)
- 📊 Prompt Logging – Track all prompts with history, statistics, and analytics
- 💰 Cost Optimization – Choose flagship or cheaper models based on preference
- ⚡ Easy Setup – One-click install in Cursor or simple manual setup
- 🔄 Call All LLMs – Get responses from all providers simultaneously
Quick Start
Ready to access multiple LLMs? Install in seconds:
Install in Cursor (Recommended):
Or install manually:
npm install -g cross-llm-mcp
# Or from source:
git clone https://github.com/JamesANZ/cross-llm-mcp.git
cd cross-llm-mcp && npm install && npm run build
Features
🤖 Individual LLM Tools
- call-chatgpt – OpenAI's ChatGPT API
- call-claude – Anthropic's Claude API
- call-deepseek – DeepSeek API
- call-gemini – Google's Gemini API
- call-grok – xAI's Grok API
- call-kimi – Moonshot AI's Kimi API
- call-perplexity – Perplexity AI API
- call-mistral – Mistral AI API
- call-huggingface – Hugging Face Inference Router (OpenAI-compatible Hub models)
🔄 Combined Tools
- call-all-llms – Call all LLMs with the same prompt
- call-llm – Call a specific provider by name
⚙️ Preferences & Model Selection
- get-user-preferences – Get current preferences
- set-user-preferences – Set default model, cost preference, and tag-based preferences
- get-models-by-tag – Find models by tag (coding, business, reasoning, math, creative, general)
📝 Prompt Logging
- get-prompt-history – View prompt history with filters
- get-prompt-stats – Get statistics about prompt logs
- delete-prompt-entries – Delete log entries by criteria
- clear-prompt-history – Clear all prompt logs
Installation
Cursor (One-Click)
Open the following deeplink to install:
cursor://anysphere.cursor-deeplink/mcp/install?name=cross-llm-mcp&config=eyJjcm9zcy1sbG0tbWNwIjp7ImNvbW1hbmQiOiJucHgiLCJhcmdzIjpbIi15IiwiY3Jvc3MtbGxtLW1jcCJdfX0=
After installation, add your API keys in Cursor settings (see Configuration below).
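The deeplink's config parameter is plain base64-encoded JSON. A minimal Node/TypeScript sketch to inspect what the one-click install actually registers:

```typescript
// Decode the base64 `config` payload from the Cursor deeplink above.
// It is the JSON entry Cursor adds to its MCP server settings.
const payload =
  "eyJjcm9zcy1sbG0tbWNwIjp7ImNvbW1hbmQiOiJucHgiLCJhcmdzIjpbIi15IiwiY3Jvc3MtbGxtLW1jcCJdfX0=";

const config = JSON.parse(Buffer.from(payload, "base64").toString("utf8"));
console.log(config);
// => { "cross-llm-mcp": { command: "npx", args: ["-y", "cross-llm-mcp"] } }
```

In other words, the one-click install launches the published package via `npx -y cross-llm-mcp`, equivalent to the manual npm setup below.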
Manual Installation
Requirements: Node.js 18+ and npm
# Clone and build
git clone https://github.com/JamesANZ/cross-llm-mcp.git
cd cross-llm-mcp
npm install
npm run build
Claude Desktop
Add to claude_desktop_config.json:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
{
"mcpServers": {
"cross-llm-mcp": {
"command": "node",
"args": ["/absolute/path/to/cross-llm-mcp/build/index.js"],
"env": {
"OPENAI_API_KEY": "your_openai_api_key_here",
"ANTHROPIC_API_KEY": "your_anthropic_api_key_here",
"DEEPSEEK_API_KEY": "your_deepseek_api_key_here",
"GEMINI_API_KEY": "your_gemini_api_key_here",
"XAI_API_KEY": "your_grok_api_key_here",
"KIMI_API_KEY": "your_kimi_api_key_here",
"PERPLEXITY_API_KEY": "your_perplexity_api_key_here",
"MISTRAL_API_KEY": "your_mistral_api_key_here",
"HF_TOKEN": "your_huggingface_token_here"
}
}
}
}
Restart Claude Desktop after configuration.
Configuration
API Keys
Set environment variables for the LLM providers you want to use:
export OPENAI_API_KEY="your_openai_api_key"
export ANTHROPIC_API_KEY="your_anthropic_api_key"
export DEEPSEEK_API_KEY="your_deepseek_api_key"
export GEMINI_API_KEY="your_gemini_api_key"
export XAI_API_KEY="your_grok_api_key"
export KIMI_API_KEY="your_kimi_api_key"
export PERPLEXITY_API_KEY="your_perplexity_api_key"
export MISTRAL_API_KEY="your_mistral_api_key"
export HF_TOKEN="your_huggingface_token"
# Or: HUGGINGFACE_API_KEY (same as HF_TOKEN)
# Optional: DEFAULT_HUGGINGFACE_MODEL, HUGGINGFACE_INFERENCE_BASE_URL (default https://router.huggingface.co/v1)
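Only providers whose keys are present are usable; the rest fail when called. A small sketch (configuredProviders is a hypothetical helper, not part of the server) to check which providers your environment currently enables:

```typescript
// Map each provider to the env var(s) that enable it (HUGGINGFACE_API_KEY
// is an accepted alias for HF_TOKEN, per the Configuration section).
const providerKeys: Record<string, string[]> = {
  chatgpt: ["OPENAI_API_KEY"],
  claude: ["ANTHROPIC_API_KEY"],
  deepseek: ["DEEPSEEK_API_KEY"],
  gemini: ["GEMINI_API_KEY"],
  grok: ["XAI_API_KEY"],
  kimi: ["KIMI_API_KEY"],
  perplexity: ["PERPLEXITY_API_KEY"],
  mistral: ["MISTRAL_API_KEY"],
  huggingface: ["HF_TOKEN", "HUGGINGFACE_API_KEY"],
};

// Return the providers for which at least one enabling variable is set.
function configuredProviders(env: Record<string, string | undefined>): string[] {
  return Object.entries(providerKeys)
    .filter(([, vars]) => vars.some((v) => !!env[v]))
    .map(([name]) => name);
}

console.log(configuredProviders(process.env));
```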
Getting API Keys
- OpenAI: https://platform.openai.com/api-keys
- Anthropic: https://console.anthropic.com/
- DeepSeek: https://platform.deepseek.com/
- Google Gemini: https://makersuite.google.com/app/apikey
- xAI Grok: https://console.x.ai/
- Moonshot AI: https://platform.moonshot.ai/
- Perplexity: https://www.perplexity.ai/hub
- Mistral: https://console.mistral.ai/
- Hugging Face: Create a fine-grained token with Inference (serverless / Inference Providers) access at https://huggingface.co/settings/tokens. See Hugging Face's Chat Completion documentation for supported models.
Running Hub models locally (outside this MCP)
This server calls Hugging Face’s hosted Inference Router; it does not download weights or run PyTorch/GGUF inside Node. To run models on your machine, use tools such as Ollama, llama.cpp, Text Generation Inference, or Hugging Face Inference Endpoints, then point other clients at those services if they expose an API.
Usage Examples
Call ChatGPT
Get a response from OpenAI:
{
"tool": "call-chatgpt",
"arguments": {
"prompt": "Explain quantum computing in simple terms",
"temperature": 0.7,
"max_tokens": 500
}
}
Call Hugging Face
Get a response from a Hub model via the Inference Router (model is the Hub repo id, e.g. Qwen/Qwen2.5-7B-Instruct):
{
"tool": "call-huggingface",
"arguments": {
"prompt": "Reply with exactly: ok",
"model": "Qwen/Qwen2.5-7B-Instruct",
"temperature": 0.3,
"max_tokens": 32
}
}
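Under the hood, call-huggingface targets the router's OpenAI-compatible chat-completions endpoint. A sketch of the request shape, assuming the standard OpenAI-compatible convention (buildRequest is illustrative, not the server's actual code):

```typescript
// BASE_URL matches the documented default; override it with
// HUGGINGFACE_INFERENCE_BASE_URL.
const BASE_URL =
  process.env.HUGGINGFACE_INFERENCE_BASE_URL ?? "https://router.huggingface.co/v1";

// Build an OpenAI-compatible chat-completions request for a Hub model.
function buildRequest(prompt: string, model: string, temperature = 0.3, maxTokens = 32) {
  return {
    url: `${BASE_URL}/chat/completions`,
    body: {
      model, // Hub repo id, e.g. "Qwen/Qwen2.5-7B-Instruct"
      messages: [{ role: "user" as const, content: prompt }],
      temperature,
      max_tokens: maxTokens,
    },
  };
}

const req = buildRequest("Reply with exactly: ok", "Qwen/Qwen2.5-7B-Instruct");
console.log(req.url);

// To actually send it (requires HF_TOKEN):
// await fetch(req.url, {
//   method: "POST",
//   headers: {
//     Authorization: `Bearer ${process.env.HF_TOKEN}`,
//     "Content-Type": "application/json",
//   },
//   body: JSON.stringify(req.body),
// });
```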
Call All LLMs
Get responses from all providers:
{
"tool": "call-all-llms",
"arguments": {
"prompt": "Write a short poem about AI",
"temperature": 0.8
}
}
Set Tag-Based Preferences
Automatically use the best model for each task type:
{
"tool": "set-user-preferences",
"arguments": {
"defaultModel": "gpt-4o",
"costPreference": "cheaper",
"tagPreferences": {
"coding": "deepseek-r1",
"general": "gpt-4o",
"business": "claude-3.5-sonnet-20241022",
"reasoning": "deepseek-r1",
"math": "deepseek-r1",
"creative": "gpt-4o"
}
}
}
Get Prompt History
View your prompt logs:
{
"tool": "get-prompt-history",
"arguments": {
"provider": "chatgpt",
"limit": 10
}
}
Model Tags
Models are tagged by their strengths:
- coding: deepseek-r1, deepseek-coder, gpt-4o, claude-3.5-sonnet-20241022
- business: claude-3-opus-20240229, gpt-4o, gemini-1.5-pro
- reasoning: deepseek-r1, o1-preview, claude-3.5-sonnet-20241022
- math: deepseek-r1, o1-preview, o1-mini
- creative: gpt-4o, claude-3-opus-20240229, gemini-1.5-pro
- general: gpt-4o-mini, claude-3-haiku-20240307, gemini-1.5-flash
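Tag preferences set via set-user-preferences resolve roughly like this (pickModel is an illustrative sketch, not the server's implementation): an explicit tag preference wins, then the first model tagged for the task, then the user's default model.

```typescript
// Tag table from the list above.
const modelsByTag: Record<string, string[]> = {
  coding: ["deepseek-r1", "deepseek-coder", "gpt-4o", "claude-3.5-sonnet-20241022"],
  business: ["claude-3-opus-20240229", "gpt-4o", "gemini-1.5-pro"],
  reasoning: ["deepseek-r1", "o1-preview", "claude-3.5-sonnet-20241022"],
  math: ["deepseek-r1", "o1-preview", "o1-mini"],
  creative: ["gpt-4o", "claude-3-opus-20240229", "gemini-1.5-pro"],
  general: ["gpt-4o-mini", "claude-3-haiku-20240307", "gemini-1.5-flash"],
};

interface Preferences {
  defaultModel: string;
  tagPreferences?: Record<string, string>;
}

// Resolution order: explicit tag preference -> first tagged model -> default.
function pickModel(tag: string, prefs: Preferences): string {
  return prefs.tagPreferences?.[tag] ?? modelsByTag[tag]?.[0] ?? prefs.defaultModel;
}

const prefs: Preferences = {
  defaultModel: "gpt-4o",
  tagPreferences: { coding: "deepseek-r1" },
};
console.log(pickModel("coding", prefs)); // explicit preference: "deepseek-r1"
console.log(pickModel("math", prefs));   // first model tagged math: "deepseek-r1"
```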
Use Cases
- Multi-Perspective Analysis – Get different perspectives from multiple LLMs
- Model Comparison – Compare responses to understand strengths and weaknesses
- Cost Optimization – Choose the most cost-effective model for each task
- Quality Assurance – Cross-reference responses from multiple models
- Intelligent Selection – Automatically use the best model for coding, business, reasoning, etc.
- Prompt Analytics – Track usage, costs, and patterns with automatic logging
Technical Details
Built with: Node.js, TypeScript, MCP SDK
Dependencies: @modelcontextprotocol/sdk, superagent, zod
Platforms: macOS, Windows, Linux
Preference Storage:
- Unix/macOS: ~/.cross-llm-mcp/preferences.json
- Windows: %APPDATA%/cross-llm-mcp/preferences.json
Prompt Log Storage:
- Unix/macOS: ~/.cross-llm-mcp/prompts.json
- Windows: %APPDATA%/cross-llm-mcp/prompts.json
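The storage paths above can be resolved in a few lines of Node; this is an illustrative sketch, not the server's own code:

```typescript
import { homedir } from "node:os";
import { join } from "node:path";

// Resolve the per-user data directory described above:
// %APPDATA%/cross-llm-mcp on Windows, ~/.cross-llm-mcp elsewhere.
function dataDir(): string {
  return process.platform === "win32" && process.env.APPDATA
    ? join(process.env.APPDATA, "cross-llm-mcp")
    : join(homedir(), ".cross-llm-mcp");
}

console.log(join(dataDir(), "preferences.json"));
console.log(join(dataDir(), "prompts.json"));
```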
Contributing
⭐ If this project helps you, please star it on GitHub! ⭐
Contributions welcome! Please open an issue or submit a pull request.
License
MIT License – see LICENSE.md for details.
Support
If you find this project useful, consider supporting it:
⚡ Lightning Network
lnbc1pjhhsqepp5mjgwnvg0z53shm22hfe9us289lnaqkwv8rn2s0rtekg5vvj56xnqdqqcqzzsxqyz5vqsp5gu6vh9hyp94c7t3tkpqrp2r059t4vrw7ps78a4n0a2u52678c7yq9qyyssq7zcferywka50wcy75skjfrdrk930cuyx24rg55cwfuzxs49rc9c53mpz6zug5y2544pt8y9jflnq0ltlha26ed846jh0y7n4gm8jd3qqaautqa
₿ Bitcoin: bc1ptzvr93pn959xq4et6sqzpfnkk2args22ewv5u2th4ps7hshfaqrshe0xtp
Ξ Ethereum/EVM: 0x42ea529282DDE0AA87B42d9E83316eb23FE62c3f