LeanKG: Stop Burning Tokens. Start Coding Lean.
Lightweight Knowledge Graph for AI-Assisted Development
LeanKG is a local-first knowledge graph that gives AI coding tools accurate codebase context. It indexes your code, builds dependency graphs, and exposes an MCP server so tools like Cursor, OpenCode, and Claude Code can query the knowledge graph directly. No cloud services, no external databases.
Visualize your knowledge graph with force-directed layout, WebGL rendering, and community clustering.

See docs/web-ui.md for more features.
Live Demo
Try LeanKG without installing: https://leankg.onrender.com
To run the same Web UI locally on a custom port:
leankg web --port 9000
Installation
One-Line Install (Recommended)
curl -fsSL https://raw.githubusercontent.com/FreePeak/LeanKG/main/scripts/install.sh | bash -s -- <target>
Supported targets:
| Target | AI Tool | Auto-Installed |
|---|---|---|
| opencode | OpenCode AI | Binary + MCP + Plugin + Skill + AGENTS.md |
| cursor | Cursor AI | Binary + MCP + Skill + AGENTS.md + Session Hook |
| claude | Claude Code | Binary + MCP + Plugin + Skill + CLAUDE.md + Session Hook |
| gemini | Gemini CLI | Binary + MCP + Skill + GEMINI.md |
| kilo | Kilo Code | Binary + MCP + Skill + AGENTS.md |
| antigravity | Google Antigravity | Binary + MCP + Skill + GEMINI.md |
Examples:
curl -fsSL https://raw.githubusercontent.com/FreePeak/LeanKG/main/scripts/install.sh | bash -s -- cursor
curl -fsSL https://raw.githubusercontent.com/FreePeak/LeanKG/main/scripts/install.sh | bash -s -- claude
Install via Cargo or Build from Source
cargo install leankg && leankg --version
git clone https://github.com/FreePeak/LeanKG.git && cd LeanKG && cargo build --release
Quick Start
leankg init # Initialize LeanKG in your project
leankg index ./src # Index your codebase
leankg watch ./src # Auto-index on file changes
leankg impact src/main.rs --depth 3 # Calculate blast radius
leankg status # Check index status
leankg metrics # View token savings
leankg web # Start Web UI at http://localhost:8080
# Obsidian vault sync
leankg obsidian init # Initialize Obsidian vault structure
leankg obsidian push # Push LeanKG data to Obsidian notes
leankg obsidian pull # Pull annotation edits from Obsidian
leankg obsidian watch # Watch vault for changes and auto-pull
leankg obsidian status # Show vault status
# Microservice call graph (via Web UI)
leankg web # Start Web UI at http://localhost:8080
# Then visit http://localhost:8080/services
See docs/cli-reference.md for all commands.
How LeanKG Helps
```mermaid
graph LR
    subgraph "Without LeanKG"
        A1[AI Tool] -->|Scans entire codebase| B1[10,000+ tokens]
        B1 --> A1
    end
    subgraph "With LeanKG"
        A2[AI Tool] -->|13-42 tokens| C[LeanKG Graph]
        C -->|Targeted subgraph| A2
    end
```
Without LeanKG, an AI tool scans the entire codebase (~10,000+ tokens per request). With LeanKG, it queries the knowledge graph for targeted context (13-42 tokens), saving up to 98% of tokens on impact analysis.
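Taking the per-query numbers above at face value, the saving for a single impact query is a quick back-of-envelope calculation (the 42-token figure is the worst case of the quoted 13-42 range):

```shell
# Percent of tokens saved for one query: graph lookup (42 tokens)
# vs. a full codebase scan (10,000 tokens).
awk -v full=10000 -v kg=42 'BEGIN { printf "%.1f%% saved\n", (1 - kg/full) * 100 }'
```

A single query comes out above the headline figure; the 98% number is the conservative bound quoted for impact analysis overall.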
Highlights
- Auto-Init -- Install script configures MCP, rules, skills, and hooks automatically
- Auto-Trigger -- Session hooks inject LeanKG context into every AI tool session
- Token Concise -- 13-42 tokens per query vs 10,000+ for full codebase scan
- Token Saving -- Up to 98% token reduction for impact analysis
- Impact Radius -- Compute blast radius before making changes
- Dependency Graph -- Build call graphs with `IMPORTS`, `CALLS`, `TESTED_BY` edges
- MCP Server -- Expose graph via MCP protocol for AI tool integration
- Multi-Language -- Index Go, TypeScript, Python, Rust, Java, Kotlin with tree-sitter
- Android -- Extract XML layouts, resources, and manifest relationships
See docs/architecture.md for system design and data model details.
Supported AI Tools
| Tool | Auto-Setup | Session Hook | Plugin |
|---|---|---|---|
| Cursor | Yes | session-start | - |
| Claude Code | Yes | session-start | Yes |
| OpenCode | Yes | - | Yes |
| Kilo Code | Yes | - | - |
| Gemini CLI | Yes | - | - |
| Google Antigravity | Yes | - | - |
| Codex | Yes | - | - |
Note: Cursor requires per-project installation. The AI features work on a per-workspace basis, so LeanKG should be installed in each project directory where you want AI context injection.
See docs/agentic-instructions.md for detailed setup and auto-trigger behavior.
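Since Cursor setup is per-workspace, one way to bootstrap several projects at once is a small shell loop. The directory names below are placeholders, and the actual installer invocation is left commented out so the sketch is safe to run as-is:

```shell
# Hypothetical helper: run the Cursor installer once per workspace.
install_leankg_cursor() {
  for dir in "$@"; do
    echo "installing LeanKG (cursor target) in $dir"
    # Uncomment to actually run the installer shown above:
    # (cd "$dir" && curl -fsSL https://raw.githubusercontent.com/FreePeak/LeanKG/main/scripts/install.sh | bash -s -- cursor)
  done
}

install_leankg_cursor ~/work/app-frontend ~/work/app-backend
```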
Context Metrics
Track token savings to understand LeanKG's efficiency.
leankg metrics --json # View with JSON output
leankg metrics --since 7d # Filter by time
leankg metrics --tool search_code # Filter by tool
See docs/metrics.md for schema and examples.
Update
# Check current version
leankg version
# Update LeanKG binary via install script
curl -fsSL https://raw.githubusercontent.com/FreePeak/LeanKG/main/scripts/install.sh | bash -s -- update
Documentation
| Doc | Description |
|---|---|
| docs/cli-reference.md | All CLI commands |
| docs/mcp-tools.md | MCP tools reference |
| docs/agentic-instructions.md | AI tool setup & auto-trigger |
| docs/architecture.md | System design, data model |
| docs/web-ui.md | Web UI features |
| docs/metrics.md | Metrics schema & examples |
| docs/benchmark.md | Performance benchmarks |
| docs/roadmap.md | Feature planning |
| docs/tech-stack.md | Tech stack & structure |
| docs/android-extraction.md | Android XML & resource extraction |
Troubleshooting
Database Lock Error
If you see `database is locked (code 5)`, another LeanKG process is holding the database:
# Kill all leankg and vite processes
leankg-kill
# Or manually
pkill -9 -f "leankg"
pkill -9 -f "vite"
Process Management
leankg-kill # Kill all leankg and vite processes (after adding to ~/.zshrc)
leankg-status # Show running leankg/vite processes
Important: Always kill the web server before indexing to avoid database lock conflicts.
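The `leankg-kill` and `leankg-status` helpers above are shell functions you add yourself. A minimal sketch for `~/.zshrc` (assuming `pkill`/`pgrep` from procps are available) might look like:

```shell
# Minimal sketches of the helpers referenced above; add to ~/.zshrc.
# The `|| true` keeps pkill from returning an error status when no
# matching process exists.
leankg-kill()   { pkill -9 -f "leankg" || true; pkill -9 -f "vite" || true; }
leankg-status() { pgrep -fl "leankg"; pgrep -fl "vite"; }
```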
Performance Benchmarks
Load Test Results (100K nodes)
| Test | Throughput |
|---|---|
| Insert elements | ~173,000 elements/sec |
| Insert relationships | ~179,000 relationships/sec |
| Retrieve all elements | ~662,000 elements/sec |
Run load tests:
cargo test --release load_test -- --nocapture
See docs/analysis/load-testing-1m-nodes-2026-04-17.md for detailed performance analysis.
Requirements
- Rust 1.70+
- macOS or Linux
License
MIT