An MCP server that integrates Gemini 2.5 Pro and OpenAI models into software development tasks, allowing you to use your entire codebase as context.

Yellhorn MCP is a Model Context Protocol (MCP) server that creates detailed workplans for implementing a task or feature. These workplans are generated by a large, capable model (such as Gemini 2.5 Pro or the o3 deep research API), include your entire codebase in the context window by default, and, depending on the model used, can also draw on URL context and web search. Generating workplans with a powerful reasoning model is highly useful for defining work to be done by coding assistants such as Claude Code or other MCP-compatible coding agents. The workplan also serves as a reference when reviewing the output of such coding models, to ensure it meets the exact originally specified requirements.
Yellhorn MCP supports a `.yellhornignore` file to exclude specific files and directories from the AI context, similar to `.gitignore`.
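For illustration, a hypothetical `.yellhornignore` (the patterns below are invented examples, not defaults shipped with the project):

```
# Build artifacts and dependencies
node_modules/
dist/
# Logs and local secrets
*.log
secrets.env
```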
```shell
# Install from PyPI
pip install yellhorn-mcp

# Install from source
git clone https://github.com/msnidal/yellhorn-mcp.git
cd yellhorn-mcp
pip install -e .
```
The server requires the following environment variables:

- `GEMINI_API_KEY`: Your Gemini API key (required for Gemini models)
- `OPENAI_API_KEY`: Your OpenAI API key (required for OpenAI models)
- `REPO_PATH`: Path to your repository (defaults to the current directory)
- `YELLHORN_MCP_MODEL`: Model to use (defaults to `"gemini-2.5-pro"`). Some models additionally enable the `web_search_preview` and `code_interpreter` tools for enhanced research capabilities
- `YELLHORN_MCP_SEARCH`: Enable/disable Google Search Grounding (defaults to `"on"` for Gemini models). Options: `"on"` or `"off"`

The server also requires the GitHub CLI (`gh`) to be installed and authenticated.
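As an illustration of how these variables combine, the documented defaults resolve roughly as in the sketch below (hypothetical helper code, not the server's actual implementation):

```python
import os

def resolve_config(env: dict) -> dict:
    """Hypothetical helper: resolve Yellhorn MCP settings from environment
    variables, applying the defaults documented above."""
    return {
        "gemini_api_key": env.get("GEMINI_API_KEY"),     # required for Gemini models
        "openai_api_key": env.get("OPENAI_API_KEY"),     # required for OpenAI models
        "repo_path": env.get("REPO_PATH", os.getcwd()),  # defaults to current directory
        "model": env.get("YELLHORN_MCP_MODEL", "gemini-2.5-pro"),
        "search": env.get("YELLHORN_MCP_SEARCH", "on"),  # search grounding toggle
    }

config = resolve_config({"GEMINI_API_KEY": "example-key"})
print(config["model"])  # gemini-2.5-pro
```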
To configure Yellhorn MCP in VSCode or Cursor, create a `.vscode/mcp.json` file at the root of your workspace with the following content:
```json
{
  "inputs": [
    {
      "type": "promptString",
      "id": "gemini-api-key",
      "description": "Gemini API Key"
    }
  ],
  "servers": {
    "yellhorn-mcp": {
      "type": "stdio",
      "command": "/Users/msnidal/.pyenv/shims/yellhorn-mcp",
      "args": [],
      "env": {
        "GEMINI_API_KEY": "${input:gemini-api-key}",
        "REPO_PATH": "${workspaceFolder}"
      }
    }
  }
}
```
To configure Yellhorn MCP with Claude Code directly, add a root-level `.mcp.json` file in your project with the following content:
```json
{
  "mcpServers": {
    "yellhorn-mcp": {
      "type": "stdio",
      "command": "yellhorn-mcp",
      "args": ["--model", "o3"],
      "env": {
        "YELLHORN_MCP_SEARCH": "on"
      }
    }
  }
}
```
Analyzes the codebase and creates a `.yellhorncontext` file listing the directories to include in the AI context. Given a description of the task you want to accomplish, this tool builds a whitelist of relevant directories, significantly reducing token usage and keeping the AI focused on relevant code.
Input:

- `user_task`: Description of the task you want to accomplish
- `codebase_reasoning`: (optional) Control the level of codebase analysis:
  - `"file_structure"`: (default) Basic file structure analysis (fastest)
  - `"lsp"`: Function signatures and docstrings only (lighter weight)
  - `"full"`: Complete file contents (most comprehensive)
  - `"none"`: No codebase context
- `ignore_file_path`: (optional) Path to the ignore file (defaults to `.yellhornignore`)
- `output_path`: (optional) Output path for the context file (defaults to `.yellhorncontext`)
- `depth_limit`: (optional) Maximum directory depth to analyze (0 = no limit)
- `disable_search_grounding`: (optional) If set to `true`, disables Google Search Grounding for this request

Output:

- `context_file_path`: Path to the created `.yellhorncontext` file
- `directories_included`: Number of directories included in the context
- `files_analyzed`: Number of files analyzed during curation

The `.yellhorncontext` file acts as a whitelist: only files matching its patterns are included in subsequent workplan and judgement calls, which significantly reduces token usage and keeps the AI focused on relevant code.
Example `.yellhorncontext` output:

```
src/api/
src/models/
tests/api/
*.config.js
```
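The whitelist semantics can be sketched as follows (hypothetical logic, not the server's actual implementation): a file passes only if it sits under a listed directory or matches a glob pattern.

```python
from fnmatch import fnmatch

# Patterns taken from the example .yellhorncontext above.
CONTEXT_PATTERNS = ["src/api/", "src/models/", "tests/api/", "*.config.js"]

def included(path: str, patterns: list[str] = CONTEXT_PATTERNS) -> bool:
    """Return True if a repo-relative path passes the whitelist."""
    for pattern in patterns:
        if pattern.endswith("/") and path.startswith(pattern):
            return True  # file lives under a whitelisted directory
        if fnmatch(path, pattern):
            return True  # file matches a glob pattern
    return False

print(included("src/api/routes.py"))  # True
print(included("docs/overview.md"))   # False
```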
Creates a GitHub issue containing a detailed workplan based on the title and detailed description.

Input:

- `title`: Title for the GitHub issue (used as the issue title and header)
- `detailed_description`: Detailed description for the workplan. Any URLs provided here are extracted and included in a References section.
- `codebase_reasoning`: (optional) Control whether AI enhancement is performed:
  - `"full"`: (default) Use AI to enhance the workplan with full codebase context
  - `"lsp"`: Use AI with lightweight codebase context (function/method signatures, class attributes, and struct fields for Python and Go)
  - `"none"`: Skip AI enhancement and use the provided description as-is
- `debug`: (optional) If set to `true`, adds a comment to the issue with the full prompt used for generation
- `disable_search_grounding`: (optional) If set to `true`, disables Google Search Grounding for this request

Output:

- `issue_url`: URL to the created GitHub issue
- `issue_number`: The GitHub issue number

Retrieves the workplan content (GitHub issue body) associated with a workplan.
Input:

- `issue_number`: The GitHub issue number for the workplan
- `disable_search_grounding`: (optional) If set to `true`, disables Google Search Grounding for this request

Output:

- The workplan content (the body of the GitHub issue)
Updates an existing workplan based on revision instructions. The tool fetches the current workplan from the specified GitHub issue and uses AI to revise it according to your instructions.
Input:

- `issue_number`: The GitHub issue number containing the workplan to revise
- `revision_instructions`: Instructions describing how to revise the workplan
- `codebase_reasoning`: (optional) Control whether AI enhancement is performed:
  - `"full"`: (default) Use AI to revise with full codebase context
  - `"lsp"`: Use AI with lightweight codebase context (function/method signatures only)
  - `"file_structure"`: Use AI with directory structure only (fastest)
  - `"none"`: Minimal codebase context
- `debug`: (optional) If set to `true`, adds a comment to the issue with the full prompt used for generation
- `disable_search_grounding`: (optional) If set to `true`, disables Google Search Grounding for this request

Output:

- `issue_url`: URL to the updated GitHub issue
- `issue_number`: The GitHub issue number

Triggers an asynchronous code judgement comparing two git refs (branches or commits) against a workplan described in a GitHub issue. Creates a placeholder GitHub sub-issue immediately, then processes the AI judgement asynchronously and updates the sub-issue with the results.
Input:

- `issue_number`: The GitHub issue number for the workplan
- `base_ref`: Base git ref (commit SHA, branch name, or tag) for comparison. Defaults to `main`.
- `head_ref`: Head git ref (commit SHA, branch name, or tag) for comparison. Defaults to `HEAD`.
- `codebase_reasoning`: (optional) Control which codebase context is provided:
  - `"full"`: (default) Use full codebase context
  - `"lsp"`: Use lighter codebase context (only function signatures for Python and Go, plus full diff files)
  - `"file_structure"`: Use only the directory structure, without file contents, for faster processing
  - `"none"`: Skip codebase context completely for the fastest processing
- `debug`: (optional) If set to `true`, adds a comment to the sub-issue with the full prompt used for generation
- `disable_search_grounding`: (optional) If set to `true`, disables Google Search Grounding for this request

Any URLs mentioned in the workplan are extracted and preserved in a References section of the judgement.
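The URL handling described above can be sketched as follows (a hypothetical illustration of the References-section behavior, not the actual extraction code):

```python
import re

URL_RE = re.compile(r"https?://[^\s)>\]]+")

def build_references(text: str) -> str:
    """Collect http(s) URLs from free text into a markdown References section."""
    urls = list(dict.fromkeys(URL_RE.findall(text)))  # dedupe, preserve order
    if not urls:
        return ""
    return "\n".join(["## References"] + [f"- {url}" for url in urls])

workplan = "Follow https://example.com/spec and see https://example.com/spec for details."
print(build_references(workplan))
```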
Output:

- `message`: Confirmation that the judgement task has been initiated
- `subissue_url`: URL to the created placeholder sub-issue where results will be posted
- `subissue_number`: The GitHub issue number of the placeholder sub-issue

Yellhorn MCP also implements the standard MCP resource API to provide access to workplans:
- `list-resources`: Lists all workplans (GitHub issues with the `yellhorn-mcp` label)
- `get-resource`: Retrieves the content of a specific workplan by issue number

These can be accessed via the standard MCP CLI commands:
```shell
# List all workplans
mcp list-resources yellhorn-mcp

# Get a specific workplan by issue number
mcp get-resource yellhorn-mcp 123
```
```shell
# Install development dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Run tests with a coverage report
pytest --cov=yellhorn_mcp --cov-report term-missing
```
The project uses GitHub Actions for continuous integration and deployment:

- Testing: runs automatically on pull requests and pushes to the main branch
- Publishing: automatically publishes to PyPI when a version tag is pushed
To release a new version:

```shell
git commit -am "Bump version to X.Y.Z"
git tag vX.Y.Z
git push && git push --tags
```
For a history of changes, see the Changelog.
For more detailed instructions, see the Usage Guide.
MIT