Detrix
Agentic debugger
Give your AI agent eyes inside any running program.
- Watch any variable at any line — agent sets an observation point in seconds, zero code changes during debugging
- Local or cloud — same workflow for Docker containers and remote hosts
- Python, Go, Rust — observation points capture values without pausing, without restarting
- Built for agents — observe, query, manage observations via natural language; Claude Code, Cursor, Windsurf
See It in Action
1-minute demo: A Go service in Docker starts reporting wrong numbers. Nothing in the client code changed. Claude watches it live and finds the cause.
Here's what the session looks like:
You: "The order-service in Docker is reporting wildly inflated revenue.
Daily total is in the thousands when transactions should be $20-200."
Agent: → reads the source, spots dead unit-conversion code
→ wakes the service via Detrix
→ observes txn.Amount, txn.Unit, total
(no line number — find_variable locates it in the running container)
"Found it. The pricing API changed what it's returning.
txn.Amount = 8957, txn.Unit = "cents" → should be $89.57, summed as $8957
In calculateRevenue, the unit field is fetched and discarded:
unit := txn.Unit
_ = unit // ← conversion was intended, never implemented
total += amount // ← adds cents as if they were dollars
This didn't matter before. It matters now."
→ fixed the client code
→ reported the API contract change upstream
No code was modified to instrument the service. No restarts. The old workflow — add a log line, rebuild, redeploy, wait for the bug to reproduce — is replaced by watching it live.
You don't need to know the line number either — describe the behavior and the agent finds where to look.
Why Detrix?
You hit a bug. The old workflow: add a print, restart, reproduce, remove the print, repeat. If it's in production, redeploy. If it's in a Docker container, get into the container. If it's intermittent, wait.
With Detrix, you just ask the agent. It finds the right line, plants an observation point, and tells you what it sees — live, nothing restarting.
That bug that cost you hours last week — redeploy after redeploy, still can't reproduce — your agent can investigate it in minutes, while your app keeps running.
| | print() / logging | Detrix |
|---|---|---|
| Iteration speed | Hours (edit → rebuild → deploy) | Minutes |
| Add new observation | Edit code → restart | Ask the agent — no code, no restart¹ |
| Production-safe | Output pollution, perf risk | Non-breaking observation points |
| Events | Ephemeral stream | Stored, queryable by metric and time |
| Capture control | Every hit, no filtering | Throttle, sample, first-hit, interval |
| Cleanup | Manual (easy to forget, ships to prod) | One command — or automatic expiry |
| Sensitive data | Secrets can leak via log output | Sensitive-named vars blocked by default; configurable blacklist + whitelist in detrix.toml |
¹ Embed detrix.init() once for zero restarts forever. Or restart once to attach a debugger (debugpy, dlv, lldb-dap) — from that point on, the agent adds and removes observations without any further restarts.
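For the sensitive-data row above, a hypothetical detrix.toml fragment is sketched below. The section and key names are illustrative assumptions, not the shipped schema; check the config that `detrix init` generates for the real keys:

```toml
# Hypothetical schema: consult your generated detrix.toml for the actual keys
[safety]
# Variable names matched against captures; matches are blocked before capture
blacklist = ["password", "api_key", "token", "secret", "private_key"]
# Explicit exceptions that may be captured despite matching the blacklist
whitelist = ["csrf_token_length"]
```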
Quick Start
Try it in 2 minutes. Your agent handles everything after step 3.
1. Install Detrix
macOS (Homebrew):
brew install flashus/tap/detrix
macOS / Linux (shell script):
curl --proto '=https' --tlsv1.2 -LsSf \
https://github.com/flashus/detrix/releases/latest/download/detrix-installer.sh | sh
Windows (PowerShell):
irm https://github.com/flashus/detrix/releases/latest/download/detrix-installer.ps1 | iex
Docker (linux/amd64, linux/arm64):
docker pull ghcr.io/flashus/detrix:latest
Build from source:
cargo install --git https://github.com/flashus/detrix detrix
Then initialise (creates config and sets up local storage):
detrix init
2. Add to your app
One line — the debugger sleeps until your agent needs it, zero overhead when idle:
import detrix
detrix.init(name="my-app")
Go and Rust work the same way — see App Integration.
3. Connect your agent
Claude Code:
claude mcp add --scope user detrix -- detrix mcp
Cursor / Windsurf — add to .mcp.json in your project root:
{
"mcpServers": {
"detrix": {
"command": "detrix",
"args": ["mcp"]
}
}
}
For cloud setup and other editors, see the setup guide.
That's it. Ask your agent to observe any line in your running app — no restarts, nothing ships to prod.
Alternative: connect without embedding
Don't want to add a dependency? Start your app directly under a debugger instead:
# Python
python -m debugpy --listen 127.0.0.1:5678 app.py
# Go
dlv debug --headless --listen=127.0.0.1:5678 --api-version=2 main.go
# Rust
lldb-dap --port 5678
Listens on 127.0.0.1 — local only. See the language setup guide for remote and Docker.
How It Works
Detrix is a daemon that runs locally or in the cloud and connects your AI agent to any running process via 29 MCP tools. Under the hood, it talks to your app's debugger via the Debug Adapter Protocol (DAP). It sets logpoints — breakpoints that evaluate an expression and log the result instead of pausing. Your application runs at full speed; Detrix captures the values.
AI Agent Detrix Daemon Debugger (DAP) Your App
(Claude Code, Cursor, (local or Docker/cloud) debugpy / dlv / (Python/Go/Rust,
Windsurf, local) lldb-dap local/cloud)
│ │ │ │
│── "observe line 127" ──▶│ │ │
│ │── set logpoint ─────────▶│ │
│ │ │── captures value ───▶│
│ │◀────────────── captured values ─────────────────│
│◀── structured events ───│ │ │
│ │ │ │
│ App never pauses. No code changes. No restarts. │
The daemon runs locally or alongside your service in Docker — same protocol either way. In cloud mode, source files are fetched automatically so the agent can find the right lines without them on your machine. See the Installation Guide for cloud setup.
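The logpoint mechanism is plain DAP. As a sketch of the kind of request the daemon would send to an adapter such as debugpy, here is a minimal `setBreakpoints` payload whose breakpoint carries a `logMessage` instead of pausing (the path, line number, and expression are illustrative; Detrix's exact payloads may differ):

```python
import json

def make_logpoint_request(path: str, line: int, expression: str) -> dict:
    """Build a DAP setBreakpoints request. A breakpoint with a logMessage
    tells the adapter to log the interpolated expression on each hit and
    continue execution, rather than halting the program."""
    return {
        "type": "request",
        "command": "setBreakpoints",
        "arguments": {
            "source": {"path": path},
            "breakpoints": [
                # {expression} is interpolated by the adapter on every hit
                {"line": line, "logMessage": f"{{{expression}}}"}
            ],
        },
    }

req = make_logpoint_request("app.py", 127, "total")
print(json.dumps(req, indent=2))
```

Because the breakpoint never stops the target, the app runs at full speed while values stream back to the daemon.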
App Integration
import detrix
detrix.init(name="my-app") # That's it. Agent controls the rest.
| Language | Install | Docs |
|---|---|---|
| Python | pip install detrix-py | Python Client |
| Go | go get github.com/flashus/detrix/clients/go | Go Client |
| Rust | detrix-rs = "1.1.1" in Cargo.toml | Rust Client |
Production pattern: Build one service instance with debug symbols and a Detrix client. Route suspect traffic to it via Kafka, a sidecar, or your load balancer. The rest of your fleet runs unaffected — full-speed, no instrumentation overhead. You get deep observability on one instance without touching production.
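The routing half of that pattern can be sketched in a few lines. Below is a hypothetical hash-based router that deterministically sends a small fraction of traffic to the one debug-instrumented instance; the instance names and the 5% split are illustrative assumptions, not part of Detrix:

```python
import hashlib

# Hypothetical fleet: one instance built with debug symbols + a Detrix client
DEBUG_INSTANCE = "order-service-debug"
FLEET = ["order-service-1", "order-service-2", "order-service-3"]
DEBUG_FRACTION = 0.05  # route roughly 5% of keys to the observable instance

def route(request_key: str) -> str:
    """Deterministically route a request: the same key always lands on the
    same instance, so a suspect customer or session can be pinned to the
    debug instance for observation while the rest of the fleet is untouched."""
    h = int(hashlib.sha256(request_key.encode()).hexdigest(), 16)
    if (h % 10_000) / 10_000 < DEBUG_FRACTION:
        return DEBUG_INSTANCE
    return FLEET[h % len(FLEET)]
```

The same split can of course live in your load balancer or message broker instead; the point is that only one instance pays any instrumentation cost.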
See the Clients Manual for full documentation.
Features
No code changes. The agent instruments your running code via observation points — nothing gets committed, nothing ships to prod.
No pausing. Observation points evaluate expressions at full execution speed, with no breakpoint-style halting. For high-frequency code paths, use sample or throttle modes to control event volume.
No forgotten cleanup. Metrics expire automatically via TTL, or remove everything with one command.
| Feature | Description |
|---|---|
| Agent tools | 29 MCP tools — observe any line, query events, enable/disable observation groups, and clean up; no line number needed |
| Zero-downtime instrumentation | Add metrics without restarting your app |
| Multi-variable capture | Capture multiple variables per observation point |
| Capture modes | Stream, sample, throttle, first-hit, periodic sampling (every N sec) |
| Runtime introspection | Stack traces, memory snapshots, variable inspection, expression evaluation |
| Multi-language | Python (debugpy), Go (delve), Rust (lldb-dap) |
| Cloud debugging | Observe Docker containers and remote hosts — no VPN, no port forwarding |
| Durable storage | Events stored in SQLite on the daemon host. Run Detrix on a remote server, connect your agent in the morning and ask what happened overnight. Daemon auto-reconnects to the debug adapter if it restarts. |
| Extensible | New frontends via open API; new language support by implementing a language adapter — Adding Languages |
| Safety validation | Sensitive variable names (password, api_key, token, secret, private_key, etc.) blocked before capture. Configurable blacklist + whitelist for variable names and functions in detrix.toml. Enable safe mode per connection to allow only variable watching — no expression execution, no stack traces, no memory snapshots. Blocked operations return a clear named error so the agent can explain the constraint. |
| Auth | Bearer token auth (static or JWT/JWKS) — designed to run behind your reverse proxy |
| Event streaming | Forward captured events to Graylog |
| 4 API protocols | MCP (stdio), gRPC, REST, WebSocket |
Documentation
| Doc | Covers |
|---|---|
| Installation Guide | Install, language setup, agent config, cloud debugging |
| CLI Reference | Command-line interface |
| Clients Manual | Python, Go, Rust client libraries |
| Architecture | Clean Architecture with 13 Rust crates |
| Adding Languages | Extend Detrix to new languages |
Contributing
cargo fmt --all && cargo clippy --all -- -D warnings && cargo test --all
- Fork the repository
- Create a feature branch
- Run the checks above
- Submit a Pull Request
License
MIT License — see LICENSE.
Found a bug? Open an issue. Found in minutes what took you days? Tell us in Discussions.