recon

35× fewer tokens for AI coding agents

Query p50

14ms

Rust compiler · 320K symbols

Token reduction

15–30×

vs Read/Grep/Glob on exploration

Index freshness

<1s

563 ms warm re-index

Languages

9

graph-aware · Rust, Python, TS, JS, Go, Java, C, C++

01 — Where it runs

Drops into the agents you already use.

recon init --mcp <ide> writes the IDE's MCP config and a strict-policy agent rules file so the agent uses recon's code_* tools before Read/Grep/Glob by default.

cc

Claude Code

recon init --mcp cc

MCP: ./.mcp.json
Rules: ./CLAUDE.md
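For orientation, the generated ./.mcp.json is a small stdio-server entry along these lines — a sketch only; exact fields may differ by recon version:

```json
{
  "mcpServers": {
    "recon": {
      "command": "recon",
      "args": ["serve"]
    }
  }
}
```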

oc

OpenCode

recon init --mcp oc

MCP: ./opencode.jsonc
Rules: ./AGENTS.md

cur

Cursor

recon init --mcp cursor

MCP: ./.cursor/mcp.json
Rules: ./.cursor/rules/recon.mdc

win

Windsurf

recon init --mcp windsurf

MCP: ~/.codeium/windsurf/mcp_config.json
Rules: ./.windsurf/rules/recon.md

02 — Tools

Twenty tools that speak symbols, not strings.

Symbol-and-text primitives, graph-aware traversal for "how does X reach Y?" and "what breaks if I change this?", and telemetry so you can prove the savings.

code_outline(path)

One line per symbol — kind, name, line. Skim a 2000-line file in a single screen.

↳ Read · ~13 ms

code_skeleton(path)

Signatures and docs only; bodies collapse to an ellipsis placeholder. About 10× compression vs. a full read.

↳ Read · ~11 ms

code_read_symbol(…)

Pull a single function, its signature, doc, body — and every caller — without loading the file.

↳ Read · <10 ms

code_find_symbol(name)

Three-tier resolver: exact SQLite → Tantivy BM25 → FTS5 trigram with nucleo fuzzy rescore.

↳ Grep · ~8 ms
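The tiered fallback can be sketched with stdlib stand-ins — the SQLite exact lookup becomes a set test, Tantivy BM25 becomes a substring scan, and the FTS5-trigram + nucleo rescore becomes difflib similarity. Names and thresholds below are illustrative assumptions, not recon's code:

```python
import difflib

def find_symbol(name, symbols):
    """Three-tier resolver sketch: exact, then substring, then fuzzy."""
    # Tier 1: exact match
    if name in symbols:
        return [name]
    # Tier 2: substring candidates (stand-in for a BM25 index)
    hits = [s for s in symbols if name.lower() in s.lower()]
    if hits:
        return sorted(hits)
    # Tier 3: fuzzy rescore over near-misses (stand-in for trigram + nucleo)
    return difflib.get_close_matches(name, symbols, n=3, cutoff=0.6)

syms = ["render_map", "render_map_cached", "page_rank", "reindex"]
print(find_symbol("render_map", syms))  # exact tier
print(find_symbol("render", syms))      # substring tier
print(find_symbol("rendr_map", syms))   # fuzzy tier catches the typo
```

Later tiers only run when earlier ones miss, which keeps the common case (an exact hit) as a single cheap lookup.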

code_find_refs(…)

Reference count and the top-k call sites for any symbol in the repo.

↳ Grep · ~12 ms

code_search(query, mode)

Exact · regex · hybrid. Tantivy-first, grep fallback. Filter DSL: type:rust !test.

↳ Grep · ~33 ms

code_list(glob?)

Structured file listing with symbol counts — single GROUP BY query, not per-file.

↳ Glob · ~57 ms

code_repo_map(budget)

PageRank-ranked repo overview, cached in SQLite, invalidated on reindex.

— ~19 ms
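The ranking idea behind the repo map can be sketched with plain power-iteration PageRank; recon's CSR-backed, cached implementation differs, and the graph and names below are illustrative:

```python
def pagerank(graph, damping=0.85, iters=50):
    """Power-iteration PageRank over {node: [outgoing references]}."""
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for n, outs in graph.items():
            if not outs:                       # dangling node: spread evenly
                for m in nodes:
                    nxt[m] += damping * rank[n] / len(nodes)
            else:
                for m in outs:
                    nxt[m] += damping * rank[n] / len(outs)
        rank = nxt
    return rank

# Toy reference graph: two symbols both reference render_map.
g = {"main": ["render_map"], "server": ["render_map"], "render_map": []}
r = pagerank(g)
print(max(r, key=r.get))  # render_map — the hub heads the map
```

A budgeted map then emits symbols in rank order until the token budget runs out, so the hubs survive even at small budgets.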

code_find_strings(…)

Search string literals and comments separately from identifiers. Grep, but narrower.

— <30 ms

code_multi_find(patterns[])

Fan out several patterns through the TextSearcher trait in a single call.

— <30 ms

code_reindex()

Agent-triggered re-index. Clears map cache, rebuilds incrementally.

— varies

code_path(src, dst)

Shortest call-graph path from src to dst. Bidirectional BFS, capped at 8 hops.

↳ chained refs · <5 ms
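The bidirectional search can be sketched as two BFS frontiers that meet in the middle, with a hop cap — an illustration of the idea, not recon's implementation; the example graph is made up:

```python
from collections import deque

def shortest_path(calls, src, dst, max_hops=8):
    """Bidirectional BFS over a call graph {fn: [callees]}, hop-capped."""
    if src == dst:
        return [src]
    rev = {}
    for n, outs in calls.items():
        for m in outs:
            rev.setdefault(m, []).append(n)    # reversed edges for the back half
    fwd, bwd = {src: None}, {dst: None}        # node -> parent toward each end
    qf, qb = deque([src]), deque([dst])
    for _ in range(max_hops):
        if not (qf and qb):
            break
        grow_fwd = len(qf) <= len(qb)          # expand the smaller frontier
        q, seen, other, edges = (qf, fwd, bwd, calls) if grow_fwd else (qb, bwd, fwd, rev)
        for _ in range(len(q)):
            n = q.popleft()
            for m in edges.get(n, []):
                if m in seen:
                    continue
                seen[m] = n
                if m in other:                 # frontiers met: stitch the path
                    path, x = [], m
                    while x is not None:       # walk back to src
                        path.append(x)
                        x = fwd[x]
                    path.reverse()
                    x = bwd[m]
                    while x is not None:       # walk forward to dst
                        path.append(x)
                        x = bwd[x]
                    return path
                q.append(m)
    return None

g = {"handler": ["service"], "service": ["render_map"], "render_map": ["draw"]}
print(shortest_path(g, "handler", "draw"))
# → ['handler', 'service', 'render_map', 'draw']
```

Expanding the smaller frontier each round keeps the visited set roughly O(b^(d/2)) instead of O(b^d), which is why the tool can stay under a few milliseconds even on large graphs.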

code_callers(sym, depth)

Transitive callers up to N rings (default 1, max 6). Cycle-safe; per-tier fan-out cap 50; total-visit cap 50 000.

↳ chained refs · <10 ms
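The three caps work together in a ring-by-ring expansion, which a short sketch makes concrete — the function shape and example graph are assumptions, only the cap values come from the description above:

```python
def transitive_callers(callers, sym, depth=2, fanout_cap=50, visit_cap=50_000):
    """Caller rings with a visited set (cycle safety), a per-tier
    fan-out cap, and a total-visit cap. `callers` maps fn -> callers of fn."""
    seen = {sym}
    ring, rings = [sym], []
    for _ in range(depth):
        nxt = []
        for n in ring:
            for c in callers.get(n, []):
                if len(nxt) >= fanout_cap:               # per-tier fan-out cap
                    break
                if c in seen or len(seen) >= visit_cap:  # cycle-safe + total cap
                    continue
                seen.add(c)
                nxt.append(c)
        if not nxt:
            break
        rings.append(nxt)
        ring = nxt
    return rings

callers = {"render_map": ["draw_tiles", "export_png"], "draw_tiles": ["main"]}
print(transitive_callers(callers, "render_map"))
# → [['draw_tiles', 'export_png'], ['main']]
```

Grouping results by ring lets the agent see direct callers before transitive ones, and the caps bound worst-case output on pathological graphs.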

code_callees(sym, depth)

Mirror of callers — what does sym call (directly and transitively)? Same caps, same shape.

code_context(sym, budget)

One-shot bundle: signature, body, callers, callees, types, tests. Replaces the canonical understand-X loop.

↳ 4-call loop · ~12 ms

code_impact(sym, depth)

Blast radius: transitive callers + reachable tests. Use before refactors to answer "what might break?"

— <15 ms

code_subsystems(limit?)

Weakly-connected components of the reference graph, ranked by hub. Architectural orientation, no directory lore.

— <15 ms
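Weakly-connected components ignore edge direction: any chain of references, either way, puts two symbols in the same subsystem. A union-find sketch (illustrative names, not recon's code):

```python
def subsystems(edges, nodes):
    """Weakly-connected components of a reference graph via union-find."""
    parent = {n: n for n in nodes}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in edges:                     # direction ignored: "weakly" connected
        parent[find(a)] = find(b)

    comps = {}
    for n in nodes:
        comps.setdefault(find(n), []).append(n)
    return sorted(comps.values(), key=len, reverse=True)  # biggest first

nodes = ["map", "tiles", "rank", "auth", "login"]
edges = [("map", "tiles"), ("map", "rank"), ("auth", "login")]
print(subsystems(edges, nodes))  # two components
```

Ranking each component by its hub (as the tool does) then gives an entry point per subsystem without relying on directory layout.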

code_subsystem(id, budget)

Drill into one subsystem from code_subsystems. Skeleton-style summary within a token budget.

— <10 ms

code_activate_repo(path)

Switch the active repository for subsequent stateful tools. Tier-limit aware via RepoRouter; persists the loaded set across recon serve restarts.

↳ multi-repo · <5 ms

code_list_repos()

Loaded repos with files, symbols, and an active flag. Pairs with code_activate_repo for discovery before switching.

↳ multi-repo · <1 ms

Token-savings telemetry runs alongside every call — measured per call against the in-process Read/grep equivalent for the 8 direct file/grep tools, and conservative static estimates for the 10 graph/ranking tools. Operator surfaces: recon stats and recon savings in the CLI; daily rollups on the dashboard.

03 — The difference

An agent reading code shouldn't burn the context window doing it.

Without recon Read · Grep · Glob

// agent wants to understand render_map
Glob("crates/**/*.rs") → 312 paths · ~12,000 tokens

Read("crates/recon-search/src/map.rs") → 412 lines · ~6,800 tokens

Grep("render_map", "crates/**") + context → 34 matches × 3-line window · ~28,000 tokens

Read("crates/recon-search/tests/map_test.rs") → 142 lines · ~2,300 tokens

Read 4 callers (server.rs, handler.rs, …) → 1,847 lines · ~31,000 tokens

Read("crates/recon-search/src/page_rank.rs") → 218 lines · ~3,600 tokens

Read("crates/recon-search/src/lib.rs") → 312 lines · ~5,100 tokens

Read 6 more files for orientation → 2,170 lines · ~27,600 tokens

subtotal ≈ 116,400 tokens

tokens burned 116,400

With recon code_* · graph-aware

// same question, symbol-first + graph-aware
code_find_symbol("render_map") → exact match · ~90 tokens

code_outline("crates/recon-search/src/map.rs") → 14 symbols, one line each · ~210 tokens

code_read_symbol("render_map") → signature, body, every caller · ~860 tokens

code_context("render_map", budget=2000) → callers, callees, types, tests · ~1,520 tokens

code_impact("render_map", depth=3) → blast radius: 12 callers, 4 tests · ~450 tokens

code_repo_map(focus="…/map.rs", budget=1500) → pageranked overview · ~1,100 tokens

code_list("crates/**/*.rs") → 312 files, symbol counts per file · ~680 tokens

code_context("render_map", budget=2000) → signature + body + callers + tests · ~1,300 tokens

subtotal ≈ 6,210 tokens

tokens burned 6,210 · ~19× less

04 — What you save

A benchmark that doesn't round up.

Headline multiples like 35× are real for fresh input tokens. Anthropic's prompt caching already absorbs most of the obvious savings on warm sessions, so the number that actually hits your bill is smaller. Here's the honest breakdown from an independent third-party measurement (grepai vs Claude Code, 155K-LOC TypeScript repo, 5 questions × 5 runs):

Fresh input tokens

-97%

51,147 → 1,326

Tool calls

-55%

139 → 62

Subagents

-100%

5 → 0 launched

Billed $ cost

-27%

$6.78 → $4.92

The gap between −97% fresh input and −27% billed cost is prompt caching doing its job. Savings dominate on cold sessions, large repos, and tasks where the agent would otherwise spawn subagents. They shrink on warm sessions and on repos small enough that the agent would read the whole thing anyway.

05 — Principles

Built on four small, stubborn ideas.

I

Symbols first, bytes last.

Everything tree-sitter can name is a first-class citizen. Everything else is noise the agent should never pay for.

II

Five output shapes. No more.

Every tool response is one of five canonical shapes in recon-core::shapes. Predictable for the model, cheap to parse.

III

Incremental by default.

gix ColdStart skips reparse on unchanged HEAD. A blake3 Merkle tree reindexes only what moved. First keystroke to queryable: under a second.
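The skip-unchanged behaviour reduces to a content-hash diff. A stdlib sketch using sha256 over a flat dict — recon's blake3 Merkle tree is the real mechanism; file names here are made up:

```python
import hashlib

def changed_files(tree, cache):
    """Return only the paths whose content hash moved; update the cache.
    `tree` maps path -> bytes, `cache` maps path -> last-seen hash."""
    dirty = [path for path, blob in tree.items()
             if cache.get(path) != hashlib.sha256(blob).hexdigest()]
    cache.update({p: hashlib.sha256(tree[p]).hexdigest() for p in dirty})
    return dirty

cache = {}
tree = {"src/map.rs": b"fn render() {}", "src/lib.rs": b"mod map;"}
print(changed_files(tree, cache))   # cold start: everything is dirty
tree["src/map.rs"] = b"fn render() { draw() }"
print(changed_files(tree, cache))   # warm: only the edited file reparses
```

A Merkle tree adds one refinement over this flat diff: unchanged directories hash identically, so whole subtrees are skipped without touching their files.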

IV

Secrets stay redacted.

Every tool response passes through secret redaction — AWS keys, PEM blocks, API tokens are stripped before they reach the agent. Per-key tenant isolation.
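In outline, redaction is a pass of secret-shaped patterns over every outgoing response. The patterns below are illustrative examples (the AWS access-key-id prefix is a well-known shape), not recon's actual rule set:

```python
import re

# Illustrative secret patterns — not recon's rule set.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),          # AWS access key id shape
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----.*?"
               r"-----END [A-Z ]*PRIVATE KEY-----", re.S),  # PEM block
    re.compile(r"sk-[A-Za-z0-9]{20,}"),       # generic API-token shape
]

def redact(text):
    """Replace secret-shaped spans before a tool response leaves the process."""
    for pat in SECRET_PATTERNS:
        text = pat.sub("[REDACTED]", text)
    return text

print(redact("key = AKIAABCDEFGHIJKLMNOP"))  # key = [REDACTED]
```

Running this on the response side (rather than at index time) means even a secret that slipped into the index never reaches the agent.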

06 — Benchmarks

Tested on real codebases. Not toy repos.

Zed

EDITOR · RUST

github →

LOC
1.3M

SYMBOLS
80K

COLD INDEX
28s

QUERY P50
13ms

find 13 ms · skeleton 19 ms · search 15 ms · refs 13 ms · map 13 ms

Rust compiler

COMPILER · RUST

github →

LOC
3.8M

SYMBOLS
320K

COLD INDEX
55s

QUERY P50
14ms

find 14 ms · skeleton 17 ms · search 36 ms · refs 15 ms · map 24 ms

All numbers: release build · warm cache · mimalloc · fat LTO · lock-free ReadPool · CSR PageRank · 25 MB binary

07 — Install

One signed binary. Your code stays local.

Step 01

Sign up with GitHub

Free. No credit card.

Sign in with GitHub →

Step 02

Install the binary

Signed with cosign. Only your license key touches the network.

Linux / macOS:

curl -fsSL https://mcprecon.pages.dev/install.sh | bash

Windows:

iwr https://mcprecon.pages.dev/install.ps1 | iex

Then activate:

recon login sk-recon-xxx

Step 03

Wire it into your agent

recon init --mcp <ide> writes the IDE's MCP config and a strict agent-rules file so the agent uses code_* tools by default.

recon init --mcp cc        # Claude Code
recon init --mcp oc        # OpenCode
recon init --mcp cursor    # Cursor
recon init --mcp windsurf  # Windsurf

Step 04

Hit a limit? Upgrade.

Free · 1 repo · 250 files · 10K LOC
Pro $3/mo · 10 repos · 5K files · 200K LOC
Team $7/mo · 25 repos · 50K files · 4M LOC

View pricing →

08 — Questions

Things security-minded buyers ask us first.

Does recon see any of my code?

No. Indexing, search, and every tool response runs on your machine. The only outbound call is a short HTTPS check to validate your license key — no paths, no queries, no source, no metadata.

Can I run it offline or air-gapped?

Yes, once activated. recon login caches a signed license locally; after that, the CLI runs with no network at all. Enterprise keys can be issued offline end-to-end.

How do I verify the binary is the one you built?

Every release ships a SHA256SUMS.txt signed with cosign (keyless, via Sigstore) and attested by our GitHub Actions release workflow. The installer verifies the digest and signature before extracting. You can re-verify by hand with cosign verify-blob.

Which IDEs and agents work with recon?

Any MCP-compatible client. recon init writes config for Claude Code, Cursor, Windsurf, and OpenCode out of the box. Anything that speaks the MCP stdio spec will work too.

What happens when my subscription ends?

Your on-disk indexes stay. The CLI declines new tool calls until you renew — nothing is uploaded, nothing is deleted. You can keep the binary; you just can't run it against a licensed tier until you re-login.

Do you store anything server-side?

Only what's needed to issue and bill a license: your GitHub email, your tier, and a hashed key prefix. No repo names, no file paths, no symbols. Our worker is open to inspection on request for enterprise accounts.

“The agent should spend its tokens thinking, not reading. recon gives back the context window — and most of the wall-clock — that Read and Grep quietly took. And it does it without ever sending a line of your code over the wire.”

— design note, recon-core · v0.3.1
