# doctree-mcp

Agentic document retrieval over markdown — BM25 search + tree navigation via MCP.

Give an AI agent structured access to your markdown docs: it searches with BM25, reads the outline, reasons about which sections matter, and retrieves only what it needs. No vector DB, no embeddings, no LLM calls at index time.
## Why
Standard RAG gives agents a bag of loosely relevant paragraphs. This gives them a table of contents they can reason over, plus a search engine that actually ranks by relevance.
search_documents("auth token refresh") → find candidate docs (BM25 ranked)
get_tree("docs:auth:middleware") → see the heading hierarchy
[n4] ## Token Refresh Flow (180 words)
[n5] ### Automatic Refresh (90 words)
[n6] ### Manual Refresh API (150 words)
[n7] ### Error Handling (200 words)
navigate_tree("docs:auth:middleware", "docs:auth:middleware:n4") → get exactly n4+n5+n6+n7
Context budget: 2K-8K tokens of precise content, versus 4K-20K tokens of noisy chunks from vector RAG.
## Quick Start
```bash
# Install Bun if you don't have it
curl -fsSL https://bun.com/install | bash

# Run directly — no clone needed
DOCS_ROOT=/path/to/your/markdown/docs bunx doctree-mcp
```
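To try the tools interactively before wiring the server into an agent, the MCP Inspector works as a generic client (assuming `bunx` or `npx` is available on your machine):

```bash
DOCS_ROOT=/path/to/your/markdown/docs bunx @modelcontextprotocol/inspector bunx doctree-mcp
```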
### Claude Desktop Configuration
```json
{
  "mcpServers": {
    "doctree": {
      "command": "bunx",
      "args": ["doctree-mcp"],
      "env": {
        "DOCS_ROOT": "/path/to/your/markdown/docs"
      }
    }
  }
}
```
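Claude Desktop reads this from `claude_desktop_config.json` (on macOS under `~/Library/Application Support/Claude/`; on Windows under `%APPDATA%\Claude\`). Restart the app after editing.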
### Run from source
```bash
git clone https://github.com/joesaby/doctree-mcp.git
cd doctree-mcp
bun install
DOCS_ROOT=./docs bun run serve        # stdio
DOCS_ROOT=./docs bun run serve:http   # HTTP (port 3100)
```
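For a quick smoke test of the HTTP transport, you can send the MCP `initialize` handshake by hand. This sketch assumes the server mounts a standard Streamable HTTP endpoint at `/mcp` on port 3100; check the repo if the path differs:

```bash
curl -s http://localhost:3100/mcp \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json, text/event-stream' \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"curl","version":"0.0.0"}}}'
```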
## MCP Tools
| Tool | Description |
|---|---|
| `list_documents` | Browse the catalog with tag/keyword filtering and facet counts |
| `search_documents` | BM25 keyword search with facet filters and glossary expansion |
| `get_tree` | Hierarchical outline for agent reasoning — structure and word counts, no content |
| `get_node_content` | Retrieve the full text of specific sections by node ID |
| `navigate_tree` | Get a section and all of its descendants in one call |
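For reference, here is how a host might drive these tools programmatically. This is a minimal sketch using the official TypeScript SDK (`@modelcontextprotocol/sdk`); the tool argument names (`query`, `doc_id`, `node_id`) are illustrative assumptions, so check each tool's actual schema via `tools/list`:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn doctree-mcp over stdio, pointed at a docs folder.
const transport = new StdioClientTransport({
  command: "bunx",
  args: ["doctree-mcp"],
  // Inherit PATH etc. so `bunx` resolves, then point at the docs folder.
  env: { ...process.env, DOCS_ROOT: "/path/to/your/markdown/docs" } as Record<string, string>,
});

const client = new Client({ name: "example-agent", version: "1.0.0" });
await client.connect(transport);

// 1. BM25 search for candidate documents.
const hits = await client.callTool({
  name: "search_documents",
  arguments: { query: "auth token refresh" },
});

// 2. Fetch the outline of a promising document (structure only, no content).
const tree = await client.callTool({
  name: "get_tree",
  arguments: { doc_id: "docs:auth:middleware" },
});

// 3. Pull one section plus all of its descendants.
const section = await client.callTool({
  name: "navigate_tree",
  arguments: { doc_id: "docs:auth:middleware", node_id: "docs:auth:middleware:n4" },
});

console.log(hits, tree, section);
await client.close();
```

The flow mirrors the Why section above: search for candidates, read the outline, then pull only the sections you need.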
## Configuration
```bash
# .env
DOCS_ROOT=./docs     # path to your markdown repository
DOCS_GLOB=**/*.md    # file glob pattern
```
See docs/CONFIGURATION.md for multiple collections, ranking tuning, frontmatter best practices, and glossary setup.
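Facet filtering and tag browsing draw on document frontmatter. A minimal sketch (the field names here are illustrative; the actual schema is in docs/CONFIGURATION.md):

```yaml
---
title: Auth Middleware
tags: [auth, middleware]
---
```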
## Performance
| Operation | Latency | Token cost |
|---|---|---|
| Full index (900 docs) | 2-5s | 0 LLM tokens |
| Incremental re-index (5 changed) | ~50ms | 0 LLM tokens |
| Search | 5-30ms | ~300-1K tokens |
| Search with facet filters | 2-15ms | ~200-800 tokens |
| Tree outline | <1ms | ~200-800 tokens |
Memory: ~25-50MB for 900 docs with full positional index and facets.
## Docs
- Architecture & Design — BM25, tree navigation, Pagefind/PageIndex attribution
- Configuration Reference — env vars, frontmatter, ranking tuning, glossary
- Competitive Analysis — comparison with PageIndex, QMD, GitMCP, Context7
## Standing on Shoulders
- PageIndex — Hierarchical tree navigation and the agent reasoning workflow
- Pagefind by CloudCannon — BM25 scoring, positional index, filter facets, density excerpts, stemming, and more. Full attribution in DESIGN.md.
- Bun.markdown by Oven — Native CommonMark parser enabling zero-cost tree construction from raw markdown
## License
MIT