openclaw-cost-tracker-mcp

MCP server for AI agent token-cost telemetry + quota-window awareness across Anthropic, OpenAI, Gemini, Ollama, AWS Bedrock. Per-agent attribution, spend-spike detection, 429-prediction tools.

MCP server for AI agent token-cost telemetry — built for the operator who woke up to a surprise bill. It catches the failure modes vendor dashboards miss: silent re-routing from subscription to API billing (the HERMES.md bug), unattended /loop overnight burns, per-agent attribution buried behind aggregate provider totals, and multi-day dashboard lag that surfaces a spike only after the money is gone. Cross-provider (Anthropic, OpenAI, Gemini, Ollama, AWS Bedrock), with per-agent and per-provider attribution, spend-spike anomaly detection (per-agent median × threshold), cheaper-routing recommendations with 30-day savings estimates, and a monthly forecast. Reads any cost-log JSONL with the standard {request_id, timestamp, provider, model, agent_id, prompt_tokens, completion_tokens, cost_usd} schema. OpenClaw operators get native ~/.openclaw/cost-logs/ parsing; other deployments wrap their provider calls with the included logging shim or commission a Custom MCP Build adapter.

Status: v1.1.3 · Tests: 119 passing · License: MIT


What it does

Production AI deployments are leaking money in ways the vendor dashboards don't surface in time. Real stories from the last 30 days on r/ClaudeAI:

  • A developer woke up to a $6,000 overnight burn from a single /loop command running 26 hours unattended on Claude Opus 4.7 (1,175 upvotes, 314 comments, May 1 2026). The Anthropic dashboard lagged by days; the spike was invisible until the limit email landed. Verbatim from the OP: "By the time it shows a spike, the money is already spent."
  • Another developer lost $200 because a string in a git commit silently routed Claude Code billing from the Max subscription to the API tier — the HERMES.md bug (1,464 upvotes, 202 comments). Vendor-acknowledged: Om Patel's thread on X (@om_patel5, 1.4M views, 4.6K likes) drew an Anthropic-side reply confirming "this was a bug with the 3rd party harness detection [and how we pull git status into the system prompt]" — a real bug in how Anthropic detects third-party harnesses and ingests git context, not isolated user error. The cost tracker is the defense layer regardless of patch timing.
  • A third left a test loop running and woke up to a surprise $80 Claude bill. Verbatim: "No alerts. No cap. No warning. Just a bill."
  • A fourth got $1,800 in API charges in two days after the built-in claude-code-guide agent recommended a command that bypasses Max-plan billing.

In every case the dashboard lagged, the per-agent attribution was invisible, and Anthropic's own hard-cap mechanism either failed or arrived too late. This MCP server surfaces per-agent + per-provider cost attribution live, queryable from inside Claude or any MCP-aware client, before the bill arrives:

> claude: where did our LLM spend go this week?
[MCP tools: cost_overview + top_cost_drivers]

Total spend last 7 days: $42.18
By provider:
  Anthropic    $30.40 (72%) — claude-sonnet-4 dominates
  OpenAI       $9.20  (22%) — chat-bot agent
  Gemini       $1.86  (4%)  — cron-summarizer (cheap-route working)
  Ollama       $0.00  (local, free)

Top 3 cost drivers:
  data-extraction-agent      $28.50 (68%)
  chat-bot                   $9.20  (22%)
  cron-summarizer            $1.86  (4%)

1 anomaly flagged — see find_cost_anomalies for details.
> claude: any cheaper-routing opportunities?
[MCP tool: model_routing_recommendations]

Recommendation: data-extraction-agent currently runs claude-sonnet-4
with avg 400-token completions — extraction-style work that
gemini-2.5-flash usually handles at ~95% quality for ~5% the cost.
Estimated savings: $27.10/30d if migrated. Test on a 10% slice first.

v1.1 — quota-window awareness. The cost tracker now also answers "will my agent hit a 429 before the window resets?" before it actually does. This is the HERMES.md / overnight-burn detector — projecting per-dimension exhaustion ETA from your active burn rate vs the rate-limit headers your provider already returns:

> claude: am I about to 429?
[MCP tool: predict_429_in_window]

severity: CRITICAL
provider: anthropic
burn_rate_tokens_per_min: 8000
dimension_predictions:
  tokens          : will 429 in ~6.5 min  (resets in 22 min) ⚠
  output_tokens   : will 429 in ~11.7 min (resets in 22 min) ⚠
  input_tokens    : safe through reset (headroom 70k at reset)
  requests        : safe through reset (headroom 858 at reset)

summary: tokens projected to 429 in ~6.5 min at current burn (8000/min).
Window resets in 22 min — throttle now.
> claude: what burn rate keeps me safe?
[MCP tool: recommend_throttle_target target_buffer_pct=10]

target_buffer_pct: 10
targets:
  tokens          : current 8000/min, max safe 545/min — must throttle
  output_tokens   : current 3000/min, max safe 1136/min — must throttle
  input_tokens    : current 5000/min, max safe 6818/min — safe
  requests        : current 1/min,    max safe 35/min   — safe

summary: throttle required on tokens, output_tokens to keep 10% buffer at reset.
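Both projections are simple arithmetic over the remaining-quota and reset-time values your provider's rate-limit headers already carry. A minimal sketch of the same math — the class and function names are illustrative, not the server's internals, and the numbers are made up to land near the transcript's ~6.5-minute ETA:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class QuotaDimension:
    remaining: int          # units left in the window, from rate-limit headers
    seconds_to_reset: float
    burn_per_min: float     # observed recent burn rate for this dimension

def minutes_to_429(dim: QuotaDimension) -> Optional[float]:
    """Projected minutes until exhaustion, or None if safe through reset."""
    if dim.burn_per_min <= 0:
        return None
    eta_min = dim.remaining / dim.burn_per_min
    return eta_min if eta_min * 60 < dim.seconds_to_reset else None

def max_safe_burn_per_min(dim: QuotaDimension, target_buffer_pct: float = 10.0) -> float:
    """Highest burn rate that still leaves target_buffer_pct headroom at reset."""
    usable = dim.remaining * (1 - target_buffer_pct / 100)
    return usable / (dim.seconds_to_reset / 60)

# Illustrative numbers: 8000 tokens/min burn, 52k tokens left, reset in 22 min.
tokens = QuotaDimension(remaining=52_000, seconds_to_reset=22 * 60, burn_per_min=8000)
print(minutes_to_429(tokens))                # → 6.5 (will 429 well before the reset)
print(round(max_safe_burn_per_min(tokens)))  # burn rate that lands at 10% headroom
```

The same formula per dimension (tokens, output_tokens, input_tokens, requests) yields the per-dimension table above; the headline severity is driven by the most-pressured dimension.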

Why openclaw-cost-tracker-mcp

Four things existing tools (provider billing dashboards, generic FinOps tools, custom scripts, even paid SaaS like Lava or AgentShield) don't do well together:

  1. Per-agent attribution, not just per-provider totals. Provider dashboards show "$X spent on Anthropic" — they can't tell you which of your six agents drove 78% of that. Cost tracker reads per-request cost-log JSONL and aggregates with the agent_id intact.

  2. Cost-spike anomaly detection per agent. A single 120k-token paste into chat — or an unattended /loop running overnight — costs more than a week of normal traffic. The default 3x-median-per-agent threshold flags those before they show up in the month-end bill. Catches the same shape as the $6k overnight thread and the silent re-routing patterns that vendor dashboards miss.

  3. Routing recommendations grounded in actual usage. Generic "use cheaper models" advice is useless. This server identifies specific agents whose volume + completion-length pattern suggests a cheaper provider would deliver the same outcome, with concrete 30-day savings estimates.

  4. Local-only, MCP-native, no traffic interception. Lava + similar paid gateways sit between your app and the provider — your traffic routes through them. AgentShield is a callback for LangChain/CrewAI/OpenAI SDK, not MCP. Cost-tracker is read-only: it parses your existing cost logs and surfaces them via MCP tools your Claude conversation can query directly. No proxy, no traffic routing, no subscription, no vendor lock-in.
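The per-agent threshold from point 2 fits in a few lines. This is an illustration of the documented 3x-median rule over the JSONL schema, not the server's actual implementation:

```python
import json
from collections import defaultdict
from statistics import median

def find_cost_anomalies(jsonl_lines, threshold=3.0):
    """Flag requests costing more than threshold x their own agent's median."""
    by_agent = defaultdict(list)
    for line in jsonl_lines:
        rec = json.loads(line)
        by_agent[rec["agent_id"]].append(rec)
    anomalies = []
    for agent, recs in by_agent.items():
        med = median(r["cost_usd"] for r in recs)
        for r in recs:
            if med > 0 and r["cost_usd"] > threshold * med:
                anomalies.append({
                    "agent_id": agent,
                    "request_id": r["request_id"],
                    "cost_usd": r["cost_usd"],
                    "multiple": round(r["cost_usd"] / med, 1),
                })
    return anomalies

logs = [
    '{"request_id":"r1","agent_id":"chat-bot","cost_usd":0.02}',
    '{"request_id":"r2","agent_id":"chat-bot","cost_usd":0.03}',
    '{"request_id":"r3","agent_id":"chat-bot","cost_usd":0.738}',  # giant paste
]
print(find_cost_anomalies(logs))  # flags r3 as a ~24.6x anomaly
```

Because the median is computed per agent, a chat-bot's $0.74 outlier is flagged even while a data-extraction agent legitimately spends far more per request.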

Built for the production-AI operator running real workloads with real spend that matters — whether on OpenClaw, Claude Code with claude -p, raw Anthropic API, OpenAI Assistants, or any combination.


Tool surface

| Tool | What it returns |
| --- | --- |
| cost_overview | Total spend + by-provider + top agents + top models + anomaly count for a window |
| costs_by_agent | Per-agent breakdown with avg-cost-per-request + share of total |
| costs_by_provider | Per-provider breakdown with token counts |
| find_cost_anomalies | Requests flagged as 3x+ above their agent's median cost |
| top_cost_drivers | Top N spending agents + models, no other noise |
| model_routing_recommendations | Specific cheaper-model suggestions with 30d savings estimates |
| forecast_monthly_cost | Projects 30-day total + per-provider with confidence note |
| current_quota_usage (v1.1) | Per-dimension rate-limit utilization + most-pressured-dimension headline |
| predict_429_in_window (v1.1) | Projects per-dimension 429 ETA from recent burn rate; CRITICAL/WARNING/INFO severity |
| recommend_throttle_target (v1.1) | Max safe burn rate per dimension to land at target_buffer_pct headroom at reset |

Resources:

  • cost://overview — 7-day snapshot
  • cost://forecast — 30-day projection
  • cost://anomalies — recent flagged anomalies
  • cost://quota (v1.1) — current quota-state snapshot

Prompts:

  • diagnose-cost-spike — walk a recent spike to its root cause + corrective action
  • weekly-cost-digest — 200-word weekly cost digest
  • diagnose-quota-pressure (v1.1) — walk current quota + burn rate, recommend a specific throttle action

Quickstart

Install

pip install openclaw-cost-tracker-mcp

Quick verify (~30 seconds, no config)

After install, run the bundled demo to see the four representative cost analyses fire against the mock backend:

openclaw-cost-tracker-mcp-demo

You'll see: total spend $3 across 4 providers (anthropic / openai / gemini / ollama), one critical 24.6× spend anomaly on chat-bot, two cheaper-routing recommendations ($2.61/30d savings), and an Anthropic quota snapshot showing tokens at 87% with 22 minutes until reset. No external I/O, no API keys — safe to run anywhere. Useful first-30-seconds check before pointing at your real cost-log JSONL.

Configure for Claude Desktop

Add to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows):

{
  "mcpServers": {
    "openclaw-cost": {
      "command": "python",
      "args": ["-m", "openclaw_cost_tracker_mcp"],
      "env": {
        "OPENCLAW_COST_BACKEND": "mock"
      }
    }
  }
}

Backends

| Backend | Status | Description |
| --- | --- | --- |
| mock | ✅ v1.0 | Sample data with deliberate anomalies + routing opportunities for protocol verification. v1.1: includes near-exhausted token-quota snapshot + recent loop-burst entries so predict_429_in_window returns CRITICAL out of the box |
| openclaw-jsonl | ✅ v1.0 | Parses OpenClaw's native cost-log JSONL files (default ~/.openclaw/cost-logs/, configurable via OPENCLAW_COST_LOGS). v1.1: also parses optional quota-snapshots.jsonl in the same dir (one record per provider response, schema in backends/openclaw_jsonl.py docstring) |
| provider-direct | ⏳ v1.2 | Reads Anthropic + OpenAI billing APIs directly (no log shim required) |

JSONL log format

Each line is one JSON record:

{"request_id":"req-abc123","timestamp":"2026-05-04T12:34:56Z","provider":"anthropic","model":"claude-sonnet-4","agent_id":"data-extraction-agent","skill_id":"extract-structured-data","prompt_tokens":8500,"completion_tokens":600,"total_tokens":9100,"cost_usd":0.0345,"duration_ms":4823}

If your OpenClaw deployment doesn't emit this format, wrap your provider calls with a small logging shim — sample shim in examples/.
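For non-OpenClaw stacks, the shim only needs to emit that schema. A hypothetical minimal version — the real sample lives in examples/; the function names and the file-per-day layout here are this sketch's choices, not requirements:

```python
import json
import uuid
from datetime import datetime, timezone
from pathlib import Path

# Default directory the openclaw-jsonl backend reads.
LOG_DIR = Path.home() / ".openclaw" / "cost-logs"

def build_record(provider, model, agent_id, prompt_tokens, completion_tokens,
                 cost_usd, duration_ms, skill_id=None):
    """One cost record in the JSONL schema shown above."""
    return {
        "request_id": f"req-{uuid.uuid4().hex[:8]}",
        "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "provider": provider,
        "model": model,
        "agent_id": agent_id,
        "skill_id": skill_id,
        "prompt_tokens": prompt_tokens,
        "completion_tokens": completion_tokens,
        "total_tokens": prompt_tokens + completion_tokens,
        "cost_usd": round(cost_usd, 6),
        "duration_ms": duration_ms,
    }

def log_request(record, log_dir=LOG_DIR):
    """Append the record as one JSONL line; one file per UTC day."""
    log_dir.mkdir(parents=True, exist_ok=True)
    path = log_dir / f"{record['timestamp'][:10]}.jsonl"
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

Call build_record + log_request after each provider response, with token counts and cost computed from your pricing table, and the openclaw-jsonl backend picks the records up on its next read.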


Roadmap

| Version | Scope | Status |
| --- | --- | --- |
| v1.0 | mock + openclaw-jsonl backends; 7 tools / 3 resources / 2 prompts; anomaly detection + routing + forecast; GitHub Actions CI matrix; PyPI Trusted Publishing | ✅ shipped |
| v1.1 | Quota-window awareness: QuotaSnapshot data model, get_latest_quota_state() backend method, 3 new tools (current_quota_usage, predict_429_in_window, recommend_throttle_target), cost://quota resource, diagnose-quota-pressure prompt. Reads anthropic-ratelimit-* headers (or equivalents) and projects per-dimension 429 ETA from recent burn rate | ✅ shipped |
| v1.2 | provider-direct backend (Anthropic + OpenAI billing API integrations); backend federation; budget alerts + threshold-breach detection; /loop overnight-burn detector | ⏳ planned |
| v1.x | Per-channel cost attribution; webhook emitter for budget + quota alerts | ⏳ planned |

Need this adapted to your stack?

If your AI deployment doesn't use OpenClaw's cost-log format — different agent harness, custom logging, AWS Bedrock metering, vendor billing API — and you want the same attribution + anomaly + routing visibility, that's a Custom MCP Build engagement.

| Tier | Scope | Investment | Timeline |
| --- | --- | --- | --- |
| Simple | Single backend adapter for your existing cost-data source | $8,000–$10,000 | 1–2 weeks |
| Standard | Custom backend + custom anomaly rules + integration with your alerting | $15,000–$20,000 | 2–4 weeks |
| Complex | Multi-backend federation + budget enforcement + custom routing logic | $25,000–$35,000 | 4–8 weeks |

To engage:

  1. Email [email protected] with subject Custom MCP Build inquiry
  2. Include: a 1-paragraph description of your stack + which tier you're considering
  3. Expect a reply within 2 business days with a 30-min discovery-call slot

This server is part of a production-AI infrastructure MCP suite — companion to silentwatch-mcp (cron silent-failure detection) and openclaw-health-mcp (deployment health). Install all three for full operational visibility.


Production AI audits

If you're running production AI and want an outside practitioner to score readiness, find the failure patterns already present (cost overruns being one of the most common), and write the corrective-action plan:

| Tier | Scope | Investment | Timeline |
| --- | --- | --- | --- |
| Audit Lite | One system, top-5 findings, written report | $1,500 | 1 week |
| Audit Standard | Full audit, all 14 patterns, 5 Cs findings, 90-day follow-up | $3,000 | 2–3 weeks |
| Audit + Workshop | Standard audit + 2-day team workshop + first monthly audit included | $7,500 | 3–4 weeks |

Same email channel: [email protected] with subject AI audit inquiry.


Contributing

PRs welcome. Backends are pluggable — see src/openclaw_cost_tracker_mcp/backends/ for the contract.

To add a new backend:

  1. Subclass CostBackend in backends/<your_backend>.py
  2. Implement get_entries() (cost telemetry); optionally override get_latest_quota_state() if your data source carries rate-limit headers (default returns None for graceful degrade)
  3. Register in backends/__init__.py
  4. Add tests in tests/test_backend_<your_backend>.py
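Sketched concretely — CostBackend and the two method names come from the contract above, but the stand-in base class and the example backend here are illustrative, not copied from the package:

```python
import json
from pathlib import Path

# Stand-in so this sketch runs standalone; a real backend imports the
# actual base class from openclaw_cost_tracker_mcp.backends instead.
class CostBackend:
    def get_entries(self):
        raise NotImplementedError

    def get_latest_quota_state(self):
        return None  # default: graceful degrade when no quota data exists

class SingleFileJsonlBackend(CostBackend):
    """Hypothetical backend reading one JSONL file of cost records."""

    def __init__(self, path):
        self.path = Path(path)

    def get_entries(self):
        # One JSON record per non-blank line, in the documented schema.
        with open(self.path) as f:
            return [json.loads(line) for line in f if line.strip()]
```

Backends that can see rate-limit headers additionally override get_latest_quota_state(); everything else inherits the None default and the quota tools degrade gracefully.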

Bug reports + feature requests: open a GitHub issue.


License

MIT — see LICENSE.


Related

  • silentwatch-mcp — cron silent-failure detection
  • openclaw-health-mcp — deployment health

Built by Temur Khan — production AI engineer. Contact: [email protected]
