better-code-review-graph

mcp-name: io.github.n24q02m/better-code-review-graph

Knowledge graph for token-efficient code reviews with Tree-sitter parsing, dual-mode embedding (ONNX + LiteLLM), and blast-radius analysis via MCP tools.

Fork of code-review-graph with critical bug fixes (multi-word search, qualified call resolution), configurable embeddings, and production CI/CD. It parses your codebase with Tree-sitter, builds a structural graph of functions/classes/imports, and gives Claude (or any MCP client) precise context so it reads only what matters.
Why Better
| Feature | code-review-graph | better-code-review-graph |
|---|---|---|
| Multi-word search | Broken (literal substring match) | AND-logic word splitting ("firebase auth" matches both verify_firebase_token and FirebaseAuth) |
| callers_of accuracy | Empty results (bare name targets) | Qualified name resolution -- same-file calls resolved to file::name |
| Embedding model | all-MiniLM-L6-v2 + torch (1.1 GB) | qwen3-embed ONNX + LiteLLM (200 MB) |
| Output size | Unbounded (500K+ chars possible) | Paginated (default 500 nodes, truncation metadata) |
| Plugin hooks | Invalid PostEdit/PostGit events | Valid PostToolUse (Write, Edit, Bash) |
| Plugin MCP | Duplicate registration (.mcp.json + plugin.json) | Single source (plugin.json only) |
| Python version | 3.10+ | 3.13 (pinned) |
| CI/CD | GitHub Actions basic | PSR + Docker multi-arch + MCP Registry |
| Test coverage | Unknown | 95%+ enforced |
All fixes are submitted upstream as standalone PRs (see Upstream PRs below).
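The multi-word search fix in the first row can be illustrated with a short sketch. This is not the fork's actual implementation: it assumes search runs over an entity's name plus its file path (the entity list, the field choice, and the `matches` helper are all hypothetical), but it shows how AND-logic word splitting lets a query like "firebase auth" match both a snake_case function and a CamelCase class.

```python
def matches(query: str, *fields: str) -> bool:
    """AND across query words, OR across searched fields: every
    whitespace-separated word must appear, case-insensitively,
    somewhere in the combined field text."""
    haystack = " ".join(fields).lower()
    return all(word in haystack for word in query.lower().split())

# Hypothetical entities as (name, file path) pairs -- the real graph
# stores richer metadata per node.
entities = [
    ("verify_firebase_token", "auth/firebase.py"),
    ("FirebaseAuth", "auth/providers.py"),
    ("parse_config", "config/loader.py"),
]
hits = [name for name, path in entities if matches("firebase auth", name, path)]
# Both Firebase entities match; parse_config does not.
```

A literal substring match for `"firebase auth"` would find none of these; splitting on whitespace and requiring each word independently is what makes the query usable.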
Quick Start
Prerequisites
- Python 3.13 (required: `requires-python = "==3.13.*"`)
Option 1: uvx (Recommended)
```jsonc
{
  "mcpServers": {
    "better-code-review-graph": {
      "command": "uvx",
      "args": ["--python", "3.13", "better-code-review-graph", "serve"],
      "env": {
        // optional: cloud embeddings via LiteLLM
        // "API_KEYS": "GOOGLE_API_KEY:AIza...",
        // optional: LiteLLM Proxy (self-hosted gateway)
        // "LITELLM_PROXY_URL": "http://10.0.0.20:4000",
        // "LITELLM_PROXY_KEY": "sk-your-virtual-key"
        // without API_KEYS, the built-in local qwen3-embed ONNX backend is used (zero-config)
      }
    }
  }
}
```
Option 2: pip
```shell
pip install better-code-review-graph
better-code-review-graph install  # creates .mcp.json in the project root
```
Option 3: Docker
```jsonc
{
  "mcpServers": {
    "better-code-review-graph": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "-v", "crg-data:/data",
        "-e", "API_KEYS",
        "n24q02m/better-code-review-graph:latest"
      ],
      "env": {
        // optional: cloud embeddings
        // "API_KEYS": "GOOGLE_API_KEY:AIza..."
      }
    }
  }
}
```
Option 4: Claude Code Plugin
```shell
claude plugin install n24q02m/better-code-review-graph@better-code-review-graph
```
Then open your project and tell Claude:
Build the code review graph for this project
MCP Tools
Claude uses these automatically once the graph is built.
| Tool | Description |
|---|---|
| `build_or_update_graph_tool` | Build or incrementally update the graph. Default: incremental (changed files only). |
| `get_impact_radius_tool` | Blast radius of changed files. Shows which functions, classes, and files are affected. Paginated with `max_results`. |
| `get_review_context_tool` | Token-optimized review context with structural summary, source snippets, and review guidance. |
| `query_graph_tool` | Predefined queries: `callers_of`, `callees_of`, `imports_of`, `importers_of`, `children_of`, `tests_for`, `inheritors_of`, `file_summary`. |
| `semantic_search_nodes_tool` | Search code entities by name/keyword or semantic similarity (requires embeddings). |
| `embed_graph_tool` | Compute vector embeddings for semantic search. Uses the dual-mode backend. |
| `list_graph_stats_tool` | Graph size, languages, node/edge breakdown, embedding count. |
| `get_docs_section_tool` | Retrieve specific documentation sections for minimal token usage. |
| `find_large_functions_tool` | Find functions/classes exceeding a line-count threshold for decomposition audits. |
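A hedged sketch of the blast-radius idea behind `get_impact_radius_tool`, assuming the graph stores qualified `file::name` nodes and caller-to-callee edges (the `impact_radius` function, its signature, and the sample graph are illustrative, not the tool's real API): anything that transitively calls a changed entity is affected, and results are cut off at `max_results` with a truncation flag mirroring the pagination metadata.

```python
from collections import deque

def impact_radius(call_graph: dict[str, list[str]], changed: set[str],
                  max_results: int = 500) -> tuple[list[str], bool]:
    """BFS over the reversed call graph from the changed entities.
    Returns (affected, truncated)."""
    # Invert edges: caller -> callee becomes callee -> [callers].
    callers: dict[str, list[str]] = {}
    for caller, callees in call_graph.items():
        for callee in callees:
            callers.setdefault(callee, []).append(caller)

    seen, queue, affected = set(changed), deque(changed), []
    while queue:
        node = queue.popleft()
        for caller in callers.get(node, []):
            if caller not in seen:
                seen.add(caller)
                affected.append(caller)
                queue.append(caller)
    return affected[:max_results], len(affected) > max_results

# Toy graph in the README's qualified file::name form.
graph = {
    "api::handler": ["auth::verify"],
    "auth::verify": ["auth::decode"],
    "jobs::cleanup": [],
}
affected, truncated = impact_radius(graph, {"auth::decode"})
# Editing auth::decode ripples up through auth::verify to api::handler.
```

The pagination cap is the fix for the unbounded-output row in the comparison table: even a hub function with thousands of transitive callers yields a bounded response plus a truncation flag.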
Embedding Backends
Embeddings enable semantic search (vector similarity instead of keyword matching). Two backends are available:
| Backend | Config | Size | Description |
|---|---|---|---|
| local (default) | Nothing needed | ~570 MB (first use) | qwen3-embed ONNX. Zero-config. Downloaded on first embed_graph_tool call. |
| litellm | API_KEYS or LITELLM_PROXY_URL | 0 MB | Cloud providers via LiteLLM (Gemini, OpenAI, Cohere, etc.). |
- Auto-detection: if `API_KEYS` or `LITELLM_PROXY_URL` is set, LiteLLM is used; otherwise, local ONNX.
- Override: set `EMBEDDING_BACKEND=local` or `EMBEDDING_BACKEND=litellm` explicitly.
- Fixed 768-dim storage: all embeddings are stored at 768 dimensions via MRL truncation, so switching backends does NOT invalidate existing vectors.
- Lazy loading: the model downloads on the first embed call, not on server start.
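A minimal sketch of the fixed 768-dim storage idea, assuming Matryoshka-style (MRL) embeddings whose leading dimensions remain meaningful on their own; the `to_768` helper is hypothetical, not the fork's actual code. Truncating every vector to a common length and re-normalizing keeps embeddings from different backends comparable under cosine similarity.

```python
import math

def to_768(vector: list[float], dim: int = 768) -> list[float]:
    """Truncate an MRL embedding to a fixed width and re-normalize to
    unit length; shorter vectors are zero-padded up to `dim`."""
    v = (vector + [0.0] * dim)[:dim]
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

# e.g. a 1536-dim cloud embedding stored alongside 768-dim local ones
stored = to_768([0.5] * 1536)
```

Because both backends land in the same 768-dim unit-normalized space, previously embedded nodes stay queryable after a backend switch, though mixing vector sources can still reduce search quality.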
CLI Reference
```shell
better-code-review-graph install  # Register MCP server with Claude Code (creates .mcp.json)
better-code-review-graph init     # Alias for install
better-code-review-graph build    # Full graph build (parse all files)
better-code-review-graph update   # Incremental update (changed files only)
better-code-review-graph watch    # Auto-update on file changes
better-code-review-graph status   # Show graph statistics
better-code-review-graph serve    # Start MCP server (stdio transport)
```
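The `build` vs `update` split above can be sketched as content-hash change detection. This is an assumption about the mechanism (the real tool may track mtimes or something else entirely); the `changed_files` helper and its index format are illustrative.

```python
import hashlib
import tempfile
from pathlib import Path

def changed_files(root: Path, index: dict[str, str]) -> list[Path]:
    """Return files whose SHA-256 differs from the stored index,
    updating the index in place -- only these need re-parsing."""
    changed = []
    for path in sorted(root.rglob("*.py")):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        key = str(path.relative_to(root))
        if index.get(key) != digest:
            index[key] = digest
            changed.append(path)
    return changed

root = Path(tempfile.mkdtemp())
(root / "a.py").write_text("def f(): pass\n")
index: dict[str, str] = {}
first = changed_files(root, index)   # everything is new on a full build
second = changed_files(root, index)  # nothing changed since, so no re-parse
```

Hashing content rather than trusting timestamps makes the incremental path robust to checkouts and file copies that reset mtimes.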
Configuration
| Variable | Default | Description |
|---|---|---|
| `EMBEDDING_BACKEND` | (auto-detect) | `local` (qwen3-embed ONNX) or `litellm` (cloud API). Auto: `API_KEYS`/proxy -> `litellm`, else `local`. |
| `EMBEDDING_MODEL` | `gemini/gemini-embedding-001` | LiteLLM embedding model (only used when backend is `litellm`). |
| `API_KEYS` | - | LLM API keys for SDK mode (format: `ENV_VAR:key,...`). Enables the LiteLLM backend. |
| `LITELLM_PROXY_URL` | - | LiteLLM Proxy URL. Enables the LiteLLM backend via proxy. |
| `LITELLM_PROXY_KEY` | - | LiteLLM Proxy virtual key. |
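A sketch of how the `API_KEYS` format (`ENV_VAR:key` pairs, comma-separated) might be parsed and exported so an SDK can read each key from the environment. The `export_api_keys` helper is hypothetical (not the server's real code) and the keys below are fake.

```python
import os

def export_api_keys(spec: str) -> dict[str, str]:
    """Split `ENV_VAR:key,ENV_VAR:key` pairs and export each one
    as an environment variable for downstream SDKs to pick up."""
    keys: dict[str, str] = {}
    for pair in filter(None, (p.strip() for p in spec.split(","))):
        name, _, value = pair.partition(":")  # key values may contain no ':'
        keys[name] = value
        os.environ[name] = value
    return keys

parsed = export_api_keys("GOOGLE_API_KEY:fake-abc123,OPENAI_API_KEY:sk-fake")
```

Packing the provider env-var name into the value lets one MCP `env` entry carry keys for several providers at once.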
Ignore files
Create `.code-review-graphignore` in your project root to exclude paths:

```
generated/**
*.generated.ts
vendor/**
node_modules/**
```
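A simplified sketch of how such patterns could be matched, assuming gitignore-like semantics anchored at the project root (the `is_ignored` helper is illustrative, not the tool's actual matcher): `dir/**` excludes everything under `dir/`, and other globs are tested against the file's basename.

```python
from fnmatch import fnmatch

def is_ignored(rel_path: str, patterns: list[str]) -> bool:
    """Match a root-relative path against simplified ignore patterns."""
    basename = rel_path.rsplit("/", 1)[-1]
    for pattern in patterns:
        if pattern.endswith("/**"):
            # "vendor/**" -> everything under "vendor/" (keep trailing slash)
            if rel_path.startswith(pattern[:-2]):
                return True
        elif fnmatch(basename, pattern):
            return True
    return False

patterns = ["generated/**", "*.generated.ts", "vendor/**", "node_modules/**"]
```

Real gitignore matching has more rules (negation with `!`, mid-path `**`, directory-only patterns); a production matcher would use a dedicated library rather than this prefix-plus-glob approximation.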
Supported Languages
Python, TypeScript, JavaScript, Go, Rust, Java, C#, Ruby, Kotlin, Swift, PHP, C/C++
Each language has full Tree-sitter grammar support for functions, classes, imports, call sites, inheritance, and test detection.
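To illustrate the kind of structural nodes such parsing yields, here is a sketch using Python's built-in `ast` module as a dependency-free stand-in for Tree-sitter; the `extract_nodes` helper and its output shape are hypothetical, not the fork's real schema.

```python
import ast

def extract_nodes(source: str, filename: str) -> list[dict]:
    """Walk a Python module and emit graph-node-like records for
    functions, classes, and imports, with qualified file::name IDs."""
    nodes = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            nodes.append({
                "id": f"{filename}::{node.name}",  # qualified file::name form
                "kind": "class" if isinstance(node, ast.ClassDef) else "function",
                "line": node.lineno,
                "is_test": node.name.startswith("test_"),  # naive test detection
            })
        elif isinstance(node, (ast.Import, ast.ImportFrom)):
            nodes.append({"kind": "import", "line": node.lineno})
    return nodes

sample = (
    "import os\n\n"
    "class Greeter:\n"
    "    def hello(self):\n"
    "        return 'hi'\n\n"
    "def test_hello():\n"
    "    assert Greeter().hello() == 'hi'\n"
)
nodes = extract_nodes(sample, "greeter.py")
```

Tree-sitter does the same job language-agnostically and error-tolerantly, which is why the real tool can cover all thirteen languages with one pipeline.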
Cross-Agent Compatibility
| Feature | Claude Code | Copilot CLI | Codex | Gemini CLI | Antigravity | OpenCode | Cursor | Windsurf | Cline | Amp |
|---|---|---|---|---|---|---|---|---|---|---|
| MCP tools (9 tools) | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| CLAUDE.md / AGENTS.md | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | -- | -- |
| Skills (slash commands) | Yes | Yes | Yes | Yes | -- | Yes | -- | -- | -- | -- |
| Hooks (PostToolUse) | Yes | -- | Yes | Yes | -- | -- | -- | -- | -- | -- |
| Plugin (marketplace) | Yes | Yes | -- | -- | -- | -- | -- | -- | -- | -- |
Upstream PRs
All fixes in this fork are submitted as standalone PRs to the original code-review-graph:
- #37 -- Multi-word search AND logic
- #38 -- Parser call target resolution (fixes #20)
- #39 -- Impact radius output pagination
If all upstream PRs are merged, this repository will be archived.
Build from Source
```shell
git clone https://github.com/n24q02m/better-code-review-graph
cd better-code-review-graph
uv sync --group dev
uv run pytest
uv run better-code-review-graph serve
```
Requirements: Python 3.13 (not 3.14+), uv
Also by n24q02m
| Server | Description | Install |
|---|---|---|
| wet-mcp | Web search, content extraction, library docs | `uvx --python 3.13 wet-mcp@latest` |
| mnemo-mcp | Persistent AI memory with hybrid search | `uvx mnemo-mcp@latest` |
| better-notion-mcp | Notion API for AI agents | `npx -y @n24q02m/better-notion-mcp@latest` |
| better-email-mcp | Email (IMAP/SMTP) for AI agents | `npx -y @n24q02m/better-email-mcp@latest` |
| better-godot-mcp | Godot Engine for AI agents | `npx -y @n24q02m/better-godot-mcp@latest` |
| better-telegram-mcp | Telegram Bot API + MTProto for AI agents | `uvx --python 3.13 better-telegram-mcp@latest` |
License
MIT - See LICENSE