repo-graph

Structural graph map of any codebase. LLM queries the graph instead of grepping through everything. 13 languages, auto-detected flows, cross-stack linking. Zero deps.

repo-graph MCP server

Structural graph memory for AI coding assistants. Map your codebase. Navigate by structure. Read only what matters.

repo-graph gives LLMs a map of your codebase — entities, relationships, and flows — so they can navigate to the right files without reading everything first.

Instead of flooding an LLM's context window with your entire codebase (or hoping it guesses right), repo-graph builds a lightweight graph of what exists, how things connect, and where the entry points are. The LLM queries the graph, finds the minimal set of files it needs, and reads only those.

Demo

https://github.com/user-attachments/assets/a1e4171b-b225-40d4-9210-39453e14b76a

https://github.com/user-attachments/assets/fc3191e5-fc35-4bd7-8372-72af55995883

Same bug, same model, same prompt — the only difference is whether repo-graph is installed.

The task: fix a reversed comparison operator in a Go + Angular monorepo (566 nodes, 620 edges).

|                | Without repo-graph               | With repo-graph                |
|----------------|----------------------------------|--------------------------------|
| Tokens used    | 75,308                           | 29,838                         |
| Time to fix    | 4m 36s                           | ~30s                           |
| Files explored | ~15 (grep, read, grep, read...)  | 2 (flow lookup + handler file) |
| Outcome        | Found and fixed the bug          | Found and fixed the bug        |

2.5x fewer tokens. ~9x faster. Same correct fix.

How the test was run

Both runs used identical conditions to keep the comparison fair:

  • Same model: Claude Opus, 100% (no Haiku routing)
  • Same prompt: "Groups that were created recently are showing as closed, and old groups show as open. This is backwards — new groups should be open for members to join. Find and fix the bug."
  • Fresh context: each run started from /clear with no prior conversation
  • No other tools: CLAUDE.md, plugins, hooks, and all other MCP servers were removed for both runs — the only variable was whether repo-graph was installed
  • No hints: the prompt describes the symptom, not the location — Claude has to find group_controller.go:57 on its own

Without repo-graph, Claude greps for keywords, reads files, greps again, reads more files, and eventually narrows down to the bug. With repo-graph, Claude calls flow("groups"), gets back the exact handler function and file, reads it, and fixes it.

Browse pre-generated examples for FastAPI, Gin, Hono, and NestJS — real graph output you can inspect without installing anything.

The problem

LLMs working on code waste most of their context on orientation:

  • Reading files that turn out to be irrelevant
  • Missing connections between components in different languages
  • Not knowing where a feature starts or what it touches
  • Loading 50 files when 5 would do

This is expensive, slow, and gets worse as codebases grow.

How repo-graph solves it

repo-graph scans your codebase once and builds a graph of:

  • Entities: modules, packages, classes, functions, routes, services, components
  • Relationships: imports, calls, handles, defines, contains
  • Flows: end-to-end paths from entry point to data layer

Then it exposes 12 MCP tools that let the LLM:

  1. Orient — "What languages are in this repo? What are the main features?"
  2. Navigate — "Trace the login flow from route to database" / "What's the shortest path between UserService and the payments API?"
  3. Scope — "How many lines would I need to read to understand this feature?" / "Give me just the files I need for this bug fix"
  4. Assess — "What's the blast radius of changing this function?" / "Which files are the biggest maintenance risks?"

The LLM gets structural context in a few hundred tokens instead of reading thousands of lines.

Supported languages

| Language    | Detection                                    | What it extracts |
|-------------|----------------------------------------------|------------------|
| Go          | go.mod                                       | Packages, functions, HTTP routes (gin/echo/chi/stdlib), imports |
| Rust        | Cargo.toml                                   | Crates, modules, structs, traits, functions, routes (Actix/Rocket/Axum) |
| TypeScript  | tsconfig.json                                | Modules, classes, functions, import relationships |
| React       | react in package.json                        | Components, hooks, context providers, React Router routes, fetch/axios calls, flows |
| Angular     | @angular/core in package.json                | Components, services, guards, DI injection, HTTP calls, feature flows |
| Python      | pyproject.toml / setup.py / requirements.txt | Packages, modules, classes, functions, routes (Flask/FastAPI/Django) |
| Java/Kotlin | pom.xml / build.gradle                       | Packages, classes, routes (Spring/JAX-RS) |
| C#/.NET     | .csproj / .sln                               | Namespaces, classes, routes (ASP.NET/Minimal API) |
| Ruby        | Gemfile / .gemspec                           | Files, classes, modules, routes (Rails) |
| PHP         | composer.json                                | Namespaces, classes, interfaces, routes (Laravel/Symfony) |
| Swift       | Package.swift / .xcodeproj                   | Files, types (class/struct/enum/protocol/actor), routes (Vapor) |
| C/C++       | CMakeLists.txt / Makefile / meson.build      | Sources, headers, classes, structs, enums, namespaces, includes |
| SCSS        | .scss files present                          | File-level bloat analysis (selector blocks, sizes) |

Multiple analyzers can match one repo (e.g., Go backend + Angular frontend + SCSS). Each contributes its nodes and edges into a single unified graph.
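The merge can be pictured in a few lines of Python. This is an illustrative sketch, not repo-graph's actual internals: nodes deduplicate by id, edges by their (from, to, type) triple, matching the rules described under "How it works".

```python
def merge_results(results):
    """Merge [(nodes, edges), ...] pairs from several analyzers into one graph.

    Nodes deduplicate by "id" (first analyzer wins); edges deduplicate by the
    (from, to, type) triple. Dict shapes mirror nodes.json / edges.json.
    """
    nodes_by_id = {}
    edge_keys = set()
    edges = []
    for nodes, analyzer_edges in results:
        for node in nodes:
            nodes_by_id.setdefault(node["id"], node)
        for edge in analyzer_edges:
            key = (edge["from"], edge["to"], edge["type"])
            if key not in edge_keys:
                edge_keys.add(key)
                edges.append(edge)
    return list(nodes_by_id.values()), edges
```

So a Go analyzer and an Angular analyzer can both emit a node for the same shared entity, and the unified graph keeps exactly one copy.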

Install

pip install mcp-repo-graph

Requires Python 3.11+. Only runtime dependency: mcp[cli].

Quick start

1. Generate the graph

repo-graph-generate --repo /path/to/your/project

This scans the codebase and writes graph data to .ai/repo-graph/ inside the target repo.

2. Connect to your AI assistant

Add to your MCP configuration:

Claude Code (~/.claude/claude_code_config.json or project .mcp.json):

{
  "mcpServers": {
    "repo-graph": {
      "command": "repo-graph",
      "args": ["--repo", "/path/to/your/project"]
    }
  }
}

With environment variable:

{
  "mcpServers": {
    "repo-graph": {
      "command": "repo-graph",
      "env": { "REPO_GRAPH_REPO": "/path/to/your/project" }
    }
  }
}

3. Use it

The AI assistant now has access to all 12 tools. Example queries it can answer:

  • "What does this codebase do?" -> status tool
  • "Trace the checkout flow" -> flow tool
  • "What would break if I change UserService?" -> impact tool
  • "What files do I need for this bug?" -> minimal_read tool
  • "This file is too big, how should I split it?" -> split_plan tool
  • "Show me the auth flow visually" -> graph_view tool

4. Keep it fresh with a git hook (recommended)

Add repo-graph-generate to a pre-commit hook so the graph stays up to date automatically — no LLM context spent on regeneration:

#!/bin/sh
# .git/hooks/pre-commit (or add to your existing hook)
repo-graph-generate --repo .
git add .ai/repo-graph/

Then make the hook executable:

chmod +x .git/hooks/pre-commit

Every commit keeps the graph current. The LLM always has a fresh map without wasting a single token on generate.

Tip: If you don't want graph data in version control, add .ai/repo-graph/ to .gitignore and skip the git add line — the graph will just live locally.

MCP tools reference

Generation

| Tool     | Parameters | Description |
|----------|------------|-------------|
| generate | (none)     | Scan the codebase from scratch, rebuild the graph, and reload |
| reload   | (none)     | Reload graph data from disk (after an external repo-graph-generate) |

Navigation

| Tool       | Parameters | Description |
|------------|------------|-------------|
| status     | (none)     | Repo overview: git state, detected languages, entity counts, available flows |
| flow       | feature    | End-to-end flow for a feature — from entry point through service layer to data |
| trace      | from_id, to_id | Shortest path between any two nodes in the graph |
| impact     | node_id, direction (upstream/downstream), depth | Fan out from a node to see what it affects or depends on |
| neighbours | node_id    | All direct connections to and from a node |

Context budgeting

| Tool         | Parameters | Description |
|--------------|------------|-------------|
| cost         | feature    | Total line count for all files in a feature's flow |
| hotspots     | top_n      | Files ranked by size * connections — maintenance risk indicators |
| minimal_read | feature, task_hint | Smallest file set needed for a specific task within a feature |

Health analysis

| Tool         | Parameters | Description |
|--------------|------------|-------------|
| bloat_report | file_path  | Internal structure of a file: functions/methods ranked by size, type counts |
| split_plan   | file_path  | Concrete suggestions for splitting an oversized file, grouped by responsibility |
| graph_view   | feature or node, depth | Visual ASCII map of a feature flow, node neighbourhood, or full graph overview |

How it works

  1. Detect — scan_project_dirs() finds project roots (including monorepo layouts like packages/*, apps/*, services/*, src/*). Each analyzer checks for its marker files.
  2. Scan — matching analyzers extract entities and relationships using regex heuristics. No AST parsing, no external toolchains, no build step required.
  3. Merge — all analyzer results merge into a single graph. Nodes deduplicate by ID, edges by (from, to, type).
  4. Serve — the MCP server loads the graph into memory and exposes BFS-based traversal tools.
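The traversal behind a tool like trace is essentially a breadth-first search over the edge list. The sketch below is an assumption about the general approach, not the server's actual code:

```python
from collections import deque

def shortest_path(edges, start, goal):
    """BFS over [{from, to, type}, ...] edges; returns the node-id path
    from start to goal, or None if no path exists."""
    adjacency = {}
    for edge in edges:
        adjacency.setdefault(edge["from"], []).append(edge["to"])
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in adjacency.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

Because BFS explores level by level, the first path that reaches the goal is guaranteed to be a shortest one, which is exactly what a "trace the shortest path between two nodes" tool needs.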

Graph data format

Generated files live in .ai/repo-graph/ inside the target repo:

  • nodes.json — [{id, type, name, file_path}, ...]
  • edges.json — [{from, to, type}, ...]
  • flows/*.yaml — named feature flows with ordered step sequences
  • state.md — human-readable snapshot for quick orientation

Edge types: imports, defines, contains, uses, calls, handles, handled_by, exports, includes.
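Because the format is plain JSON, a generated graph can be inspected without the MCP server at all. A minimal sketch, assuming only the .ai/repo-graph/ layout described above:

```python
import json
from collections import Counter
from pathlib import Path

def load_graph(repo_root):
    """Read nodes.json and edges.json from a repo's .ai/repo-graph/ directory."""
    graph_dir = Path(repo_root) / ".ai" / "repo-graph"
    nodes = json.loads((graph_dir / "nodes.json").read_text())
    edges = json.loads((graph_dir / "edges.json").read_text())
    return nodes, edges

# Quick sanity check on a generated graph, e.g.:
#   nodes, edges = load_graph("/path/to/your/project")
#   print(len(nodes), "nodes")
#   print(Counter(edge["type"] for edge in edges))
```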

Adding a new analyzer

Create repo_graph/analyzers/<language>.py:

from .base import AnalysisResult, Edge, LanguageAnalyzer, Node, scan_project_dirs, rel_path, read_safe

class MyLangAnalyzer(LanguageAnalyzer):

    @staticmethod
    def detect(repo_root):
        # Check for language marker files
        return any(
            (d / "my-marker").exists()
            for d in scan_project_dirs(repo_root)
        )

    def scan(self):
        nodes, edges = [], []
        # ... scan files, extract entities, build relationships ...
        return AnalysisResult(
            nodes=nodes,
            edges=edges,
            state_sections={"MyLang": f"{len(nodes)} entities\n"},
        )

    # Optional: file-level analysis for bloat_report / split_plan
    def supported_extensions(self):
        return {".mylang"}

    def analyze_file(self, file_path):
        # Return dict with function/method sizes, class counts, etc.
        pass

    def format_bloat_report(self, analysis):
        # Format the analysis dict into a human-readable string
        pass

Register it in analyzers/__init__.py by adding it to _analyzer_classes().

License

MIT

Support

If repo-graph saved you time, consider buying me a coffee.

Buy Me a Coffee
buymeacoffee.com/polycrisis
