# Consult7 MCP Server
Consult7 is a Model Context Protocol (MCP) server that lets AI agents consult large context window models via OpenRouter to analyze extensive file collections - entire codebases, document repositories, or mixed content - that exceed the current agent's context limits.
## Why Consult7?
Consult7 enables any MCP-compatible agent to offload file analysis to large context models (up to 2M tokens). Useful when:
- Agent's current context is full
- Task requires specialized model capabilities
- Need to analyze large codebases in a single query
- Want to compare results from different models
"For Claude Code users, Consult7 is a game changer."
## How it works
Consult7 collects files from the specific paths you provide (with optional wildcards in filenames), assembles them into a single context, and sends them to a large context window model along with your query. The result is fed directly back to the agent you are working with.
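A minimal sketch of that flow (illustrative only; the `assemble_context` helper and the file-separator format are assumptions, not Consult7's actual implementation):

```python
import glob

def assemble_context(files: list[str], query: str) -> str:
    """Expand patterns, read matching files, and build one large prompt."""
    parts = []
    for pattern in files:
        # Patterns like /Users/john/project/src/*.py expand here;
        # plain paths without wildcards simply match themselves.
        for path in sorted(glob.glob(pattern)):
            with open(path, encoding="utf-8", errors="replace") as f:
                parts.append(f"--- {path} ---\n{f.read()}")
    return "\n\n".join(parts) + f"\n\nQuery: {query}"
```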
## Example Use Cases
### Quick codebase summary
- Files: `["/Users/john/project/src/*.py", "/Users/john/project/lib/*.py"]`
- Query: "Summarize the architecture and main components of this Python project"
- Model: `"google/gemini-3-flash-preview"`
- Mode: `"fast"`
### Deep analysis with reasoning
- Files: `["/Users/john/webapp/src/*.py", "/Users/john/webapp/auth/*.py", "/Users/john/webapp/api/*.js"]`
- Query: "Analyze the authentication flow across this codebase. Think step by step about security vulnerabilities and suggest improvements"
- Model: `"anthropic/claude-sonnet-4.6"`
- Mode: `"think"`
### Generate a report saved to file
- Files: `["/Users/john/project/src/*.py", "/Users/john/project/tests/*.py"]`
- Query: "Generate a comprehensive code review report with architecture analysis, code quality assessment, and improvement recommendations"
- Model: `"google/gemini-2.5-pro"`
- Mode: `"think"`
- Output File: `"/Users/john/reports/code_review.md"`
- Result: Returns `"Result has been saved to /Users/john/reports/code_review.md"` instead of flooding the agent's context
## Featured: Gemini 3.1 Models
Consult7 supports Google's Gemini 3.1 family:
- Gemini 3.1 Pro (`google/gemini-3.1-pro-preview`) - Flagship reasoning model, 1M context
- Gemini 3 Flash (`google/gemini-3-flash-preview`) - Ultra-fast model, 1M context
- Gemini 3.1 Flash Lite (`google/gemini-3.1-flash-lite-preview`) - Ultra-fast lite model, 1M context
Quick mnemonics for power users:
- `gemt` = Gemini 3.1 Pro + think (flagship reasoning)
- `gemf` = Gemini 3 Flash + fast (ultra fast)
- `gptt` = GPT-5.4 + think (latest GPT)
- `grot` = Grok 4 + think (alternative reasoning)
- `oput` = Claude Opus 4.6 + think (deep reasoning)
- `ULTRA` = Run GEMT, GPTT, GROT, and OPUT in parallel (4 frontier models)
These mnemonics make it easy to reference model+mode combinations in your queries.
## Installation
### Claude Code
Simply run:
```
claude mcp add -s user consult7 uvx -- consult7 your-openrouter-api-key
```
### Claude Desktop
Add to your Claude Desktop configuration file:
```json
{
  "mcpServers": {
    "consult7": {
      "type": "stdio",
      "command": "uvx",
      "args": ["consult7", "your-openrouter-api-key"]
    }
  }
}
```
Replace `your-openrouter-api-key` with your actual OpenRouter API key.
No installation required - `uvx` automatically downloads and runs consult7 in an isolated environment.
## Command Line Options
```
uvx consult7 <api-key> [--test]
```
- `<api-key>`: Required. Your OpenRouter API key
- `--test`: Optional. Test the API connection
The model and mode are specified when calling the tool, not at startup.
## Supported Models
Consult7 supports all 500+ models available on OpenRouter. Below are the flagship models with optimized dynamic file size limits:
| Model | Context | Use Case |
|---|---|---|
| `openai/gpt-5.4` | 1M | Latest GPT, balanced performance |
| `google/gemini-3.1-pro-preview` | 1M | Flagship reasoning model |
| `google/gemini-3-flash-preview` | 1M | Gemini 3 Flash, ultra fast |
| `google/gemini-3.1-flash-lite-preview` | 1M | Ultra-fast lite model |
| `anthropic/claude-opus-4.6` | 200k | Best quality, deep reasoning |
| `anthropic/claude-sonnet-4.6` | 200k | Excellent reasoning, fast |
| `anthropic/claude-haiku-4.5` | 200k | Budget, very fast |
| `x-ai/grok-4` | 256k | Alternative reasoning model |
| `x-ai/grok-4.1-fast` | 2M | Largest context window |
Quick mnemonics:
- `gptt` = `openai/gpt-5.4` + `think` (latest GPT, deep reasoning)
- `gemt` = `google/gemini-3.1-pro-preview` + `think` (Gemini 3.1 Pro, flagship reasoning)
- `grot` = `x-ai/grok-4` + `think` (Grok 4, deep reasoning)
- `oput` = `anthropic/claude-opus-4.6` + `think` (Claude Opus, deep reasoning)
- `opuf` = `anthropic/claude-opus-4.6` + `fast` (Claude Opus, no reasoning)
- `gemf` = `google/gemini-3-flash-preview` + `fast` (Gemini 3 Flash, ultra fast)
- `ULTRA` = call GEMT, GPTT, GROT, and OPUT in parallel (4 frontier models for maximum insight)
You can use any OpenRouter model ID (e.g., `deepseek/deepseek-r1-0528`). See the full model list. File size limits are automatically calculated based on each model's context window.
## Performance Modes
- `fast`: No reasoning - quick answers, simple tasks
- `mid`: Moderate reasoning - code reviews, bug analysis
- `think`: Maximum reasoning - security audits, complex refactoring
## File Specification Rules
- Absolute paths only: `/Users/john/project/src/*.py`
- Wildcards in filenames only: `/Users/john/project/*.py` (not in directory paths)
- Extension required with wildcards: `*.py`, not `*`
- Mix files and patterns: `["/path/src/*.py", "/path/README.md", "/path/tests/*_test.py"]`
Common patterns:
- All Python files: `/path/to/dir/*.py`
- Test files: `/path/to/tests/*_test.py` or `/path/to/tests/test_*.py`
- Multiple extensions: `["/path/*.js", "/path/*.ts"]`
Automatically ignored: `__pycache__`, `.env`, `secrets.py`, `.DS_Store`, `.git`, `node_modules`
Size limits: Dynamic based on model context window (e.g., Grok 4 Fast: ~8MB, GPT-5.4: ~1.5MB)
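A minimal sketch of how these rules could be enforced (the `check_pattern` and `is_ignored` helpers are illustrative assumptions, not Consult7's actual validation code):

```python
import os

# Names skipped during collection, per the list above.
IGNORED = {"__pycache__", ".env", "secrets.py", ".DS_Store", ".git", "node_modules"}

def check_pattern(pattern: str) -> None:
    """Raise ValueError if a file spec violates the rules above."""
    if not os.path.isabs(pattern):
        raise ValueError(f"absolute paths only: {pattern}")
    directory, filename = os.path.split(pattern)
    if "*" in directory:
        raise ValueError(f"wildcards are allowed in filenames only: {pattern}")
    if "*" in filename and not os.path.splitext(filename)[1]:
        raise ValueError(f"wildcards need an extension ('*.py', not '*'): {pattern}")

def is_ignored(path: str) -> bool:
    """True if any path component is on the ignore list."""
    return any(part in IGNORED for part in path.split(os.sep))

check_pattern("/Users/john/project/*.py")    # passes
# check_pattern("src/*.py")                  # would raise: not absolute
# check_pattern("/Users/john/*/src/*.py")    # would raise: wildcard in directory
```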
## Tool Parameters
The consultation tool accepts the following parameters:
- `files` (required): List of absolute file paths or patterns with wildcards in filenames only
- `query` (required): Your question or instruction for the LLM to process the files
- `model` (required): The LLM model to use (see Supported Models above)
- `mode` (required): Performance mode - `fast`, `mid`, or `think`
- `output_file` (optional): Absolute path to save the response to a file instead of returning it
  - If the file exists, it will be saved with an `_updated` suffix (e.g., `report.md` → `report_updated.md`)
  - When specified, returns only: `"Result has been saved to /path/to/file"`
  - Useful for generating reports, documentation, or analyses without flooding the agent's context
- `zdr` (optional): Enable Zero Data Retention routing (default: `false`)
  - When `true`, routes only to endpoints with a ZDR policy (prompts are not retained by the provider)
  - ZDR available: Gemini 3.1 Pro/Flash, Claude Opus 4.6, GPT-5
  - Not available: GPT-5.4, Grok 4 (returns an error)
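As an illustration, a call that saves a ZDR-routed report to disk might look like this (parameter values are examples; the model is one of the ZDR-capable options listed above):

```json
{
  "files": ["/Users/john/project/src/*.py"],
  "query": "Write a security-focused code review of these modules",
  "model": "google/gemini-3.1-pro-preview",
  "mode": "think",
  "output_file": "/Users/john/reports/security_review.md",
  "zdr": true
}
```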
## Usage Examples
### Via MCP in Claude Code
Claude Code will automatically use the tool with proper parameters:
```json
{
  "files": ["/Users/john/project/src/*.py"],
  "query": "Explain the main architecture",
  "model": "google/gemini-3-flash-preview",
  "mode": "fast"
}
```
### Via Python API
```python
import asyncio

from consult7.consultation import consultation_impl

async def main():
    result = await consultation_impl(
        files=["/path/to/file.py"],
        query="Explain this code",
        model="google/gemini-3-flash-preview",
        mode="fast",  # fast, mid, or think
        provider="openrouter",
        api_key="sk-or-v1-...",
    )
    print(result)

asyncio.run(main())
```
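The `ULTRA` mnemonic maps naturally onto this API: fan the same files and query out to the four frontier model+mode combinations concurrently. A sketch under that assumption (error handling omitted; `ultra` is a hypothetical helper, not part of the package):

```python
import asyncio

from consult7.consultation import consultation_impl

# The four ULTRA models (see the mnemonic table above), all in think mode.
ULTRA_MODELS = [
    "google/gemini-3.1-pro-preview",  # gemt
    "openai/gpt-5.4",                 # gptt
    "x-ai/grok-4",                    # grot
    "anthropic/claude-opus-4.6",      # oput
]

async def ultra(files: list[str], query: str, api_key: str) -> list:
    tasks = [
        consultation_impl(
            files=files,
            query=query,
            model=model,
            mode="think",
            provider="openrouter",
            api_key=api_key,
        )
        for model in ULTRA_MODELS
    ]
    # All four consultations run concurrently; results come back in order.
    return await asyncio.gather(*tasks)
```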
## Testing
```
# Test OpenRouter connection
uvx consult7 sk-or-v1-your-api-key --test
```
## Uninstalling
To remove consult7 from Claude Code:
```
claude mcp remove consult7 -s user
```
## Version History
### v3.4.0
- Upgraded models: Gemini 3.1 Pro, Claude Opus 4.6, Claude Sonnet 4.6, Grok 4.1 Fast
- Added new models: Claude Haiku 4.5, Gemini 3.1 Flash Lite
- Updated mnemonics: `gemt` → Gemini 3.1 Pro, `oput`/`opuf` → Claude Opus 4.6
- Legacy model IDs still supported
### v3.3.0
- Fixed GPT-5.2 thinking mode truncation issue (switched to streaming)
- Added `google/gemini-3-flash-preview` (Gemini 3 Flash, ultra fast)
- Updated `gemf` mnemonic to use Gemini 3 Flash
- Added `zdr` parameter for Zero Data Retention routing
### v3.2.0
- Updated to GPT-5.2 with effort-based reasoning
### v3.1.0
- Added `google/gemini-3-pro-preview` (1M context, flagship reasoning model)
- New mnemonics: `gemt` (Gemini 3 Pro), `grot` (Grok 4), `ULTRA` (parallel execution)
### v3.0.0
- Removed Google and OpenAI direct providers - now OpenRouter only
- Removed `|thinking` suffix - use the `mode` parameter instead (now required)
- Clean `mode` parameter API: `fast`, `mid`, `think`
- Simplified CLI from `consult7 <provider> <key>` to `consult7 <key>`
- Better MCP integration with enum validation for modes
- Dynamic file size limits based on model context window
### v2.1.0
- Added `output_file` parameter to save responses to files
### v2.0.0
- New file list interface with simplified validation
- Reduced file size limits to realistic values
## License
MIT