Analyze large codebases and document collections using high-context models via OpenRouter, OpenAI, or Google AI -- very useful, e.g., with Claude Code
Consult7 is a Model Context Protocol (MCP) server that enables AI agents to consult large context window models for analyzing extensive file collections - entire codebases, document repositories, or mixed content that exceeds the current agent's context limits. It supports the OpenRouter, OpenAI, and Google AI providers.
When working with AI agents that have limited context windows (like Claude with 200K tokens), Consult7 allows them to leverage models with massive context windows to analyze large codebases or document collections that would otherwise be impossible to process in a single query.
"For Claude Code users, Consult7 is a game changer."
Consult7 recursively collects all files from a given path that match your regex pattern (including all subdirectories), assembles them into a single context, and sends them to a large context window model along with your query. The result of this query is directly fed back to the agent you are working with.
".*\.py$"
(all Python files)/Users/john/my-python-project
".*\.(py|js|ts)$"
(Python, JavaScript, TypeScript files)/Users/john/backend
".*test.*\.py$|.*_test\.py$"
(test files)/Users/john/project
".*\.(py|js|ts)$"
"gemini-2.5-flash|thinking"
/Users/john/webapp
Simply run:

```shell
# OpenRouter
claude mcp add -s user consult7 uvx -- consult7 openrouter your-api-key

# Google AI
claude mcp add -s user consult7 uvx -- consult7 google your-api-key

# OpenAI
claude mcp add -s user consult7 uvx -- consult7 openai your-api-key
```
Add to your Claude Desktop configuration file:

```json
{
  "mcpServers": {
    "consult7": {
      "type": "stdio",
      "command": "uvx",
      "args": ["consult7", "openrouter", "your-api-key"]
    }
  }
}
```
Replace `openrouter` with your provider choice (`google` or `openai`) and `your-api-key` with your actual API key.
No installation required - `uvx` automatically downloads and runs consult7 in an isolated environment.
```shell
uvx consult7 <provider> <api-key> [--test]
```

- `<provider>`: Required. Choose from `openrouter`, `google`, or `openai`
- `<api-key>`: Required. Your API key for the chosen provider
- `--test`: Optional. Test the API connection

The model is specified when calling the tool, not at startup. The server shows example models for your provider on startup.
Google AI standard models:

- `"gemini-2.5-flash"` - Fast model
- `"gemini-2.5-pro"` - Intelligent model
- `"gemini-2.0-flash-exp"` - Experimental model

With thinking mode (add `|thinking` suffix):

- `"gemini-2.5-flash|thinking"` - Fast with deep reasoning
- `"gemini-2.5-pro|thinking"` - Intelligent with deep reasoning

OpenRouter standard models:

- `"google/gemini-2.5-pro"` - Intelligent, 1M context
- `"google/gemini-2.5-flash"` - Fast, 1M context
- `"anthropic/claude-sonnet-4"` - Claude Sonnet, 200k context
- `"openai/gpt-4.1"` - GPT-4.1, 1M+ context

With reasoning mode (add `|thinking` suffix):

- `"anthropic/claude-sonnet-4|thinking"` - Claude with 31,999 reasoning tokens
- `"openai/gpt-4.1|thinking"` - GPT-4.1 with reasoning effort=high

OpenAI standard models (include context length):

- `"gpt-4.1-2025-04-14|1047576"` - 1M+ context, very fast
- `"gpt-4.1-nano-2025-04-14|1047576"` - 1M+ context, ultra fast
- `"o3-2025-04-16|200k"` - Advanced reasoning model
- `"o4-mini-2025-04-16|200k"` - Fast reasoning model

O-series models with `|thinking` marker:

- `"o1-mini|128k|thinking"` - Mini reasoning with `|thinking` marker
- `"o3-2025-04-16|200k|thinking"` - Advanced reasoning with `|thinking` marker

Note: For OpenAI, `|thinking` is only supported on o-series models and serves as an informational marker. The models use reasoning tokens automatically.

Advanced: You can specify custom thinking tokens with `|thinking=30000`, but this is rarely needed.
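The `model|context|thinking` naming convention above can be parsed in a few lines. This is a sketch of the naming scheme only, not Consult7's internal parser; the returned field names are assumptions:

```python
def parse_model_spec(spec: str) -> dict:
    """Split a model spec like 'o3-2025-04-16|200k|thinking' into parts."""
    name, *options = spec.split("|")
    info = {"model": name, "context": None,
            "thinking": False, "thinking_tokens": None}
    for opt in options:
        if opt == "thinking":
            info["thinking"] = True
        elif opt.startswith("thinking="):
            info["thinking"] = True
            info["thinking_tokens"] = int(opt.split("=", 1)[1])
        else:
            info["context"] = opt  # e.g. "200k" or "1047576"
    return info
```

For example, `parse_model_spec("o3-2025-04-16|200k|thinking")` yields the model name, a context of `"200k"`, and thinking enabled.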
```shell
# Test OpenRouter
uvx consult7 openrouter sk-or-v1-... --test

# Test Google AI
uvx consult7 google AIza... --test

# Test OpenAI
uvx consult7 openai sk-proj-... --test
```
To remove consult7 from Claude Code (or before reinstalling):

```shell
claude mcp remove consult7 -s user
```