Second Opinion 🔍
An MCP (Model Context Protocol) server that assists Claude Code in reviewing commits and codebases. It leverages external LLMs (OpenAI, Google Gemini, Ollama, Mistral) to provide intelligent code review, git diff analysis, commit quality assessment, and uncommitted work analysis.
Features
- Git Diff Analysis: Analyze git diff output to understand code changes using LLMs
- Code Review: Review code for quality, security, and best practices with AI assistance
- Commit Analysis: Analyze git commits for quality and adherence to best practices
- Uncommitted Work Analysis: Analyze all uncommitted changes or just staged changes
- Repository Information: Get information about git repositories
- Multiple LLM Support: Works with OpenAI, Google Gemini, Ollama (local), and Mistral AI
- 🚀 Smart Optimization: Dynamic token allocation and task-specific temperature tuning
- ⚡ Performance Tuning: Provider-specific optimizations and memory-aware chunking
- Security: Input validation, secure path handling, and API key protection
- Memory Safety: Configurable memory limits and streaming support for large diffs
Installation
Prerequisites
- Go 1.20 or higher
- Git
- Claude Code Desktop app
Build from Source
- Clone the repository:
git clone https://github.com/dshills/second-opinion.git
cd second-opinion
- Install dependencies:
go mod tidy
- Build the server:
go build -o bin/second-opinion
Configuration
Second Opinion supports two configuration methods, with the following priority order:
- JSON Configuration File (preferred): ~/.second-opinion.json in your home directory
- Environment Variables: a .env file or system environment variables
JSON Configuration (Recommended)
Create a .second-opinion.json file in your home directory:
{
  "default_provider": "openai",
  "temperature": 0.3,
  "max_tokens": 4096,
  "server_name": "Second Opinion 🔍",
  "server_version": "1.0.0",
  "openai": {
    "api_key": "sk-your-openai-api-key",
    "model": "gpt-5-mini"
  },
  "google": {
    "api_key": "your-google-api-key",
    "model": "gemini-2.0-flash-exp"
  },
  "ollama": {
    "endpoint": "http://localhost:11434",
    "model": "devstral:latest"
  },
  "mistral": {
    "api_key": "your-mistral-api-key",
    "model": "mistral-small-latest"
  },
  "memory": {
    "max_diff_size_mb": 10,
    "max_file_count": 1000,
    "max_line_length": 1000,
    "enable_streaming": true,
    "chunk_size_mb": 1
  }
}
🚀 Smart Optimization Features:
- Dynamic Token Allocation: Automatically adjusts tokens (4096-32768) based on diff size
- Task-Specific Temperature: Optimizes temperature (0.1-0.3) based on analysis type
- Provider Optimization: Custom parameters for each LLM provider
- Memory Management: Automatic chunking for large diffs and high file counts
Environment Variables Configuration
If no JSON configuration is found, the server falls back to environment variables:
- Copy the example environment file:
cp .env.example .env
- Edit .env and configure your LLM providers:
# Set your default provider
DEFAULT_PROVIDER=openai # or google, ollama, mistral
# Configure each provider with its own API key and preferred model
OPENAI_API_KEY=sk-your-openai-api-key
OPENAI_MODEL=gpt-5-mini # or gpt-5, gpt-5-nano, gpt-5-chat-latest, gpt-4o, gpt-4o-mini, gpt-4-turbo, gpt-3.5-turbo
GOOGLE_API_KEY=your-google-api-key
GOOGLE_MODEL=gemini-2.0-flash-exp # or gemini-1.5-flash, gemini-1.5-pro
OLLAMA_ENDPOINT=http://localhost:11434
OLLAMA_MODEL=devstral:latest # or llama3.2, codellama, mistral, etc.
MISTRAL_API_KEY=your-mistral-api-key
MISTRAL_MODEL=mistral-small-latest # or mistral-large-latest, codestral-latest
# Global settings apply to all providers
LLM_TEMPERATURE=0.3 # Controls randomness (0.0-2.0, default: 0.3)
LLM_MAX_TOKENS=4096 # Maximum response length (default: 4096)
Setting up with Claude Code
1. Locate Claude Code Configuration
The configuration file location depends on your operating system:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
- Linux: ~/.config/Claude/claude_desktop_config.json
2. Edit Configuration
Open the configuration file and add the Second Opinion server:
Option 1: Using JSON Configuration (Recommended)
{
  "mcpServers": {
    "second-opinion": {
      "command": "/path/to/second-opinion/bin/second-opinion"
    }
  }
}
Replace /path/to/second-opinion with the actual path where you cloned the repository.
Option 2: Using Environment Variables
{
  "mcpServers": {
    "second-opinion": {
      "command": "/path/to/second-opinion/bin/second-opinion",
      "env": {
        "DEFAULT_PROVIDER": "openai",
        "OPENAI_API_KEY": "your-openai-api-key",
        "OPENAI_MODEL": "gpt-5-mini",
        "LLM_TEMPERATURE": "0.3",
        "LLM_MAX_TOKENS": "4096"
      }
    }
  }
}
3. Restart Claude Code
After saving the configuration, restart Claude Code for the changes to take effect.
4. Verify Installation
In Claude Code, you should see "second-opinion" in the MCP servers list. You can test it by asking:
"What git repository information can you get from the current directory?"
Available Tools
1. analyze_git_diff 🚀 Optimized
Analyzes git diff output to understand code changes using the configured LLM with automatic optimization.
Parameters:
- diff_content (required): Git diff output to analyze
- summarize (optional): Whether to provide a summary of changes
- provider (optional): LLM provider to use (overrides default)
- model (optional): Model to use (overrides provider default)
Smart Optimizations:
- Dynamic Token Allocation: 4096-32768 tokens based on diff size
- Temperature Tuning: 0.25 optimized for diff analysis
- Chunking: Automatic chunking for large diffs (>10MB or >1000 files)
- Provider-Specific: Custom parameters per LLM provider
Example in Claude Code:
"Analyze this git diff and tell me what changed: [paste diff here]"
2. review_code 🚀 Optimized
Reviews code for quality, security, and best practices using the configured LLM with task-specific optimization.
Parameters:
- code (required): Code to review
- language (optional): Programming language of the code
- focus (optional): Specific focus area - security, performance, style, or all
- provider (optional): LLM provider to use (overrides default)
- model (optional): Model to use (overrides provider default)
Smart Optimizations:
- Task-Specific Temperature: 0.1 for security focus (high precision), 0.2 for general code review
- Dynamic Token Allocation: Scales with code size for comprehensive analysis
- Focus-Aware Analysis: Specialized prompts and parameters per focus area
Example in Claude Code:
"Review this Python code for security issues: [paste code here]"
3. analyze_commit 🚀 Optimized
Analyzes a git commit for quality and adherence to best practices using the configured LLM with commit-specific optimization.
Parameters:
- commit_sha (optional): Git commit SHA to analyze (default: HEAD)
- repo_path (optional): Path to the git repository (default: current directory)
- provider (optional): LLM provider to use (overrides default)
- model (optional): Model to use (overrides provider default)
Smart Optimizations:
- Commit Analysis Temperature: 0.2 for consistent, deterministic commit analysis
- Memory-Safe Diff Processing: Handles large commits with automatic truncation
- Combined Analysis: Includes commit message quality, diff analysis, and best practices
Example in Claude Code:
"Analyze the latest commit in this repository"
"Analyze commit abc123 and tell me if it follows best practices"
4. analyze_uncommitted_work 🚀 Optimized
Analyzes uncommitted changes in a git repository to help prepare for commits with intelligent optimization.
Parameters:
- repo_path (optional): Path to the git repository (default: current directory)
- staged_only (optional): Analyze only staged changes (default: false, analyzes all uncommitted changes)
- provider (optional): LLM provider to use (overrides default)
- model (optional): Model to use (overrides provider default)
Smart Optimizations:
- Code Review Temperature: 0.2 for balanced analysis of uncommitted changes
- Large Changeset Handling: Automatic chunking for extensive modifications
- Context-Aware Analysis: Tailored analysis for staged vs. all uncommitted work
LLM Analysis Includes:
- Summary of all changes (files modified, added, deleted)
- Type and nature of changes (feature, bugfix, refactor, etc.)
- Completeness and readiness for commit
- Potential issues or concerns
- Suggested commit message(s) if changes are ready
- Recommendations for organizing commits if changes should be split
Example in Claude Code:
"Analyze my uncommitted changes and suggest a commit message"
"Review only my staged changes before I commit"
"Should I split my current changes into multiple commits?"
5. get_repo_info
Gets information about a git repository (no LLM analysis).
Parameters:
- repo_path (optional): Path to the git repository (default: current directory)
Example in Claude Code:
"Show me information about this git repository"
Security Features
- Input Validation: All repository paths and commit SHAs are validated to prevent command injection
- Path Restrictions: Repository paths must be within the current working directory
- API Key Protection: API keys are never exposed in error messages or logs
- HTTP Timeouts: All LLM API calls have 30-second timeouts to prevent hanging
- Concurrent Access: Thread-safe provider management for concurrent requests
Optimization System 🚀
Second Opinion includes a comprehensive optimization system that automatically tunes performance based on content and context:
Dynamic Token Allocation
- 4096 tokens: Very small diffs (<5KB)
- 6144 tokens: Small diffs (5-20KB)
- 8192 tokens: Medium diffs (20-50KB)
- 12288 tokens: Large diffs (50-150KB)
- 16384 tokens: Very large diffs (150-500KB)
- 32768 tokens: Huge diffs (>500KB)
Task-Specific Temperature Settings
- 0.1: Security reviews (maximum precision)
- 0.2: Code reviews and commit analysis (mostly deterministic)
- 0.25: Diff analysis (slightly flexible)
- 0.3: Architecture reviews (allows creativity)
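To make the two tables above concrete, here is a minimal Go sketch of how the lookups could be implemented. The function names and task labels are illustrative, not the project's actual API:
// Illustrative sketch of size-based token allocation and
// task-based temperature selection, mirroring the tiers documented above.
package optimization

// MaxTokensForDiff maps a diff's size in bytes to a token budget.
func MaxTokensForDiff(sizeBytes int) int {
	kb := sizeBytes / 1024
	switch {
	case kb < 5:
		return 4096
	case kb < 20:
		return 6144
	case kb < 50:
		return 8192
	case kb < 150:
		return 12288
	case kb < 500:
		return 16384
	default:
		return 32768
	}
}

// TemperatureForTask returns the documented temperature for each analysis type.
func TemperatureForTask(task string) float64 {
	switch task {
	case "security_review":
		return 0.1
	case "code_review", "commit_analysis":
		return 0.2
	case "diff_analysis":
		return 0.25
	default: // architecture reviews and anything else
		return 0.3
	}
}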
Provider-Specific Optimizations
- OpenAI: Full token allocation with top_p=0.9
- Google: Capped at 8192 tokens with focused sampling (top_k=20, top_p=0.8)
- Mistral: Conservative allocation with top_p=0.8
- Ollama: Local model optimization with repeat_penalty=1.05
Memory Management
- Automatic Chunking: Large diffs (>10MB or >1000 files) are intelligently split
- Smart Chunk Sizing: Adapts chunk size based on file count
- Memory-Aware Streaming: Enables streaming for large operations
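A rough Go sketch of line-boundary chunking under the chunk_size_mb setting (illustrative only; the real implementation may differ):
package optimization

import "strings"

// SplitDiff breaks a large diff into chunks of roughly maxChunkBytes,
// splitting only on line boundaries so hunks stay readable.
func SplitDiff(diff string, maxChunkBytes int) []string {
	var chunks []string
	var b strings.Builder
	for _, line := range strings.SplitAfter(diff, "\n") {
		if b.Len() > 0 && b.Len()+len(line) > maxChunkBytes {
			chunks = append(chunks, b.String())
			b.Reset()
		}
		b.WriteString(line)
	}
	if b.Len() > 0 {
		chunks = append(chunks, b.String())
	}
	return chunks
}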
Development
Project Structure
second-opinion/
├── main.go # MCP server setup and tool registration
├── handlers.go # Tool handler implementations
├── validation.go # Input validation functions
├── config/ # Configuration loading and optimization
│ ├── config.go # Main configuration with optimization methods
│ └── optimization_test.go # Comprehensive optimization tests
├── llm/ # LLM provider implementations
│ ├── provider.go # Provider interface, prompts, and optimization wrapper
│ ├── openai.go # OpenAI implementation
│ ├── google.go # Google Gemini implementation
│ ├── ollama.go # Ollama implementation with advanced options
│ └── mistral.go # Mistral implementation with additional parameters
├── CLAUDE.md # Claude Code specific instructions
└── TODO.md # Development roadmap
Running Tests
# Run all tests
go test ./... -v
# Run optimization tests specifically
go test ./config -v
# Run specific test suites
go test ./llm -v -run TestProviderConnections
# Run with race detection
go test -race ./...
# Run with coverage
go test -cover ./...
Linting
# Install golangci-lint if not already installed
go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest
# Run linter
golangci-lint run
# Auto-fix issues where possible
golangci-lint run --fix
Building
# Build for current platform
go build -o bin/second-opinion
# Build with race detector (for development)
go build -race -o bin/second-opinion
# Build for different platforms
GOOS=darwin GOARCH=amd64 go build -o bin/second-opinion-darwin-amd64
GOOS=linux GOARCH=amd64 go build -o bin/second-opinion-linux-amd64
GOOS=windows GOARCH=amd64 go build -o bin/second-opinion-windows-amd64.exe
Troubleshooting
Common Issues
"Provider not configured" error
- Ensure you have set up either ~/.second-opinion.json or environment variables
- Check that API keys are valid and have appropriate permissions

"Not a git repository" error
- Ensure you're running the tool in a directory with a .git folder
- The tool validates that paths are git repositories for security

Timeout errors
- Check your internet connection
- For Ollama, ensure the local server is running: ollama serve
- Consider using a faster model if timeouts persist

Permission denied errors
- The tool only allows access to the current working directory and subdirectories
- Ensure the binary has execute permissions: chmod +x bin/second-opinion
Debug Mode
To see detailed logs, you can run the server directly:
./bin/second-opinion 2>debug.log
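You can also smoke-test the server outside Claude Code by piping a JSON-RPC initialize request into it over stdio. The envelope below follows the MCP specification; the protocol version and client name are placeholder assumptions:
echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"smoke-test","version":"0.0.0"}}}' | ./bin/second-opinion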
Contributing
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Ensure all tests pass and linting is clean
- Submit a pull request
See TODO.md for planned features and known issues.
Memory Usage
For large repositories, see docs/MEMORY_USAGE.md for configuration options to handle large diffs efficiently.
License
MIT