# Code Council

Your AI Code Review Council - Get diverse perspectives from multiple AI models in parallel.

One AI can miss things. A council of AIs catches more.

Code Council runs your code through multiple AI models simultaneously, then shows you where they agree, where they disagree, and what only one model caught.

## Example Output
## Consensus Analysis
### Unanimous (All 4 models agree) - High Confidence
**Critical: SQL Injection Vulnerability**
Location: src/api/users.ts:42
The user input is directly interpolated into the SQL query without sanitization.
Use parameterized queries instead.
---
### Majority (3 of 4 models) - Moderate Confidence
**High: Missing Input Validation**
Location: src/api/users.ts:38
The userId parameter is used without validation. Add type checking.
---
### Disagreement - Your Judgment Needed
**Session Token Expiration**
Location: src/api/auth.ts:28
- Kimi K2.5: "Tokens should expire after 24 hours"
- DeepSeek V3.2: "Current 7-day expiration is reasonable for this use case"
- Minimax M2.1: "No issue found"
---
### Single Model Finding - Worth Checking
**Low: Magic Number**
Location: src/utils/pagination.ts:12
Found by: GLM 4.7
The value 20 should be extracted to a named constant.
## Why Multiple Models?
Different AI models have different strengths:
- One model might miss a security issue another catches
- Unanimous findings are almost certainly real problems
- Disagreements highlight where you should look closer
- Single-model findings might be noise, or might be the one model that saw something others missed
Think of it as getting 4 senior engineers to review your code at once.
## Quick Start

### MCP Server (Claude Desktop, Cursor, etc.)

Add to your Claude Desktop config (`~/Library/Application Support/Claude/claude_desktop_config.json` on macOS):
```json
{
  "mcpServers": {
    "code-council": {
      "command": "npx",
      "args": ["-y", "@klitchevo/code-council"],
      "env": {
        "OPENROUTER_API_KEY": "your-api-key-here"
      }
    }
  }
}
```
Get your API key at OpenRouter.
That's it. Ask Claude: "Use review_code to check this function: [paste code]"
### CLI (GitHub Actions, CI/CD)
Run reviews directly from command line:
```bash
# Review git changes
npx @klitchevo/code-council review git --review-type diff

# Review with inline PR comments format (for GitHub Actions)
npx @klitchevo/code-council review git --review-type diff --format pr-comments

# Review code from stdin
echo "function foo() {}" | npx @klitchevo/code-council review code

# Review with custom models
npx @klitchevo/code-council review git --models "anthropic/claude-sonnet-4,openai/gpt-4o"

# Show help
npx @klitchevo/code-council review --help
```
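The CLI reads the same `OPENROUTER_API_KEY` environment variable as the MCP server, so for local runs you can export it once in your shell (a minimal sketch; the key value is a placeholder):

```bash
# Export the OpenRouter key for the current shell session (placeholder value)
export OPENROUTER_API_KEY="your-api-key-here"

# Then run any of the review commands above, e.g. review the current diff
npx @klitchevo/code-council review git --review-type diff
```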
More setup options: See Configuration Guide for Cursor, VS Code, custom models, and advanced options.
## GitHub Actions
Automatically review PRs with multiple AI models. Findings appear as inline comments on the exact lines of code. Code fixes use GitHub's suggestion syntax for one-click apply. Re-runs automatically clean up old comments.
### Quick Setup
Generate the workflow file automatically:
```bash
npx @klitchevo/code-council setup workflow
```
This creates `.github/workflows/code-council-review.yml` with inline PR comments enabled.
Options:
- `--simple` - Use markdown format instead of inline comments
- `--force` - Overwrite existing workflow file
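For example, to regenerate an existing workflow using the simpler markdown output, the two flags can be combined:

```bash
# Recreate the workflow file, replacing any existing one, with markdown-format comments
npx @klitchevo/code-council setup workflow --simple --force
```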
### Manual Setup
Or create the workflow manually:
```yaml
name: Code Council Review

on:
  pull_request:
    types: [opened, synchronize, ready_for_review, reopened]

jobs:
  review:
    runs-on: ubuntu-latest
    if: github.event.pull_request.draft == false
    permissions:
      contents: read
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Run Code Council Review
        env:
          OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
        run: |
          npx @klitchevo/code-council review git \
            --review-type diff \
            --format pr-comments \
            > review.json

      - name: Post Review
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          gh api repos/${{ github.repository }}/pulls/${{ github.event.pull_request.number }}/reviews \
            --method POST \
            --input review.json
```
Add `OPENROUTER_API_KEY` to your repository secrets (Settings > Secrets and variables > Actions).
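If you use the GitHub CLI, the secret can also be added from the terminal (the key value is a placeholder):

```bash
# Store the OpenRouter key as a repository secret via the GitHub CLI
gh secret set OPENROUTER_API_KEY --body "your-api-key-here"
```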
## Use Cases
| Scenario | Tool | What You Get |
|---|---|---|
| About to merge a PR | review_git_changes | Multi-model review of your diff |
| Automated PR reviews | CLI review git | Multi-model review in GitHub Actions |
| Planning a refactor | review_plan | Catch design issues before coding |
| Reviewing React components | review_frontend | Accessibility + performance + UX focus |
| Securing an API endpoint | review_backend | Security + architecture analysis |
| Want deeper discussion | discuss_with_council | Multi-turn conversation with context |
| Audit entire codebase | tps_audit | Flow, waste, bottlenecks analysis |
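For instance, before starting a large refactor you might ask Claude: "Use review_plan to check this migration plan for design issues: [paste plan]" - the same pattern as the review_code example above.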
Full tool reference: See Tools Reference for all parameters and examples.
## Reading the Results
Code Council shows confidence levels for each finding:
| Level | Meaning | Action |
|---|---|---|
| Unanimous | All models agree | High confidence - fix this |
| Majority | Most models agree | Likely valid - investigate |
| Disagreement | Models conflict | Your judgment needed |
| Single | One model found this | Worth checking |
## Configuration
Code Council works out of the box with sensible defaults. For customization:
- Configuration Guide - MCP client setup, config files, environment variables
- Model Selection - Choose models, pricing, performance tradeoffs
- Tools Reference - Detailed tool parameters and examples
### Custom Models Example
```json
{
  "env": {
    "OPENROUTER_API_KEY": "your-api-key",
    "CODE_REVIEW_MODELS": ["anthropic/claude-sonnet-4.5", "openai/gpt-4o"]
  }
}
```
## Cost
Default models are chosen for cost-effectiveness (~$0.01-0.05 per review).
Swap in Claude/GPT-4 for higher quality at higher cost (~$0.10-0.30 per review).
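At those rates, a team reviewing 100 PRs a month spends roughly $1-5 with the default models, or about $10-30 with the premium ones.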
See Model Selection Guide for pricing details and optimization tips.
## Requirements
- Node.js >= 18.0.0
- OpenRouter API key
- MCP-compatible client (Claude Desktop, Cursor, etc.)
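To confirm the Node.js requirement is met:

```bash
# Code Council needs Node.js 18 or newer
node --version   # should print v18.0.0 or higher
```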
## Troubleshooting
"OPENROUTER_API_KEY environment variable is required"
Add the API key to the env section of your MCP client configuration.
Reviews are slow This is expected when using multiple models. Consider using fewer models or faster models like Gemini Flash.
Models returning errors Check your OpenRouter credits and model availability at status.openrouter.ai.
## Contributing
Contributions welcome! Please open an issue or PR.
## License
MIT
## Links
- Documentation - Full docs and examples
- OpenRouter - Multi-model AI API
- Model Context Protocol - MCP specification
- Claude Desktop - MCP-compatible AI assistant