Metrx MCP Server

Track AI agent costs, detect waste, optimize models, and prove ROI. 23 MCP tools for LLM cost tracking, provider arbitrage, budget enforcement, and revenue attribution.



Your AI agents are wasting money. Metrx finds out how much, and fixes it.

The official MCP server for Metrx — the AI Agent Cost Intelligence Platform. Give any MCP-compatible agent (Claude, GPT, Gemini, Cursor, Windsurf) the ability to track its own costs, detect waste, optimize model selection, and prove ROI.

Why Metrx?

| Problem | What Metrx Does |
| --- | --- |
| No visibility into agent spend | Real-time cost dashboards per agent, model, and provider |
| Overpaying for LLM calls | Provider arbitrage finds cheaper models for the same task |
| Runaway costs | Budget enforcement with auto-pause when limits are hit |
| Wasted tokens | Cost leak scanner detects retry storms, context bloat, model mismatch |
| Can't prove AI ROI | Revenue attribution links agent actions to business outcomes |

Quick Start

Try it now — no signup required

npx @metrxbot/mcp-server --demo

This starts the server with sample data so you can explore all 23 tools instantly.

Connect your real data

Option A — Interactive login (recommended):

npx @metrxbot/mcp-server --auth

Opens your browser to get an API key, validates it, and saves it to ~/.metrxrc so you never need to set env vars.

Option B — Environment variable:

METRX_API_KEY=sk_live_your_key_here npx @metrxbot/mcp-server --test

Get your free API key at app.metrxbot.com/sign-up.

Add to your MCP client (Claude Desktop, Cursor, Windsurf)

If you used --auth, no env block is needed — the key is read from ~/.metrxrc automatically:

{
  "mcpServers": {
    "metrx": {
      "command": "npx",
      "args": ["@metrxbot/mcp-server"]
    }
  }
}

Or pass the key explicitly via environment:

{
  "mcpServers": {
    "metrx": {
      "command": "npx",
      "args": ["@metrxbot/mcp-server"],
      "env": {
        "METRX_API_KEY": "sk_live_your_key_here"
      }
    }
  }
}

Remote HTTP endpoint

For remote agents (no local install needed):

POST https://metrxbot.com/api/mcp
Authorization: Bearer sk_live_your_key_here
Content-Type: application/json
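If you want to see the wire format, here is a hedged TypeScript sketch of a raw tool call, assuming the endpoint accepts standard MCP JSON-RPC 2.0 `tools/call` messages (the helper names `buildToolCallRequest` and `callMetrxTool` are illustrative, not part of the package):

```typescript
// Build a JSON-RPC 2.0 tools/call payload (message shape per the MCP spec).
function buildToolCallRequest(tool: string, args: Record<string, unknown>) {
  return {
    jsonrpc: "2.0" as const,
    id: 1,
    method: "tools/call",
    params: { name: tool, arguments: args },
  };
}

// POST the payload to the remote endpoint with your API key.
async function callMetrxTool(
  apiKey: string,
  tool: string,
  args: Record<string, unknown>,
) {
  const res = await fetch("https://metrxbot.com/api/mcp", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildToolCallRequest(tool, args)),
  });
  return res.json();
}
```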

From npm

npm install @metrxbot/mcp-server

23 Tools Across 10 Domains

Dashboard (3 tools)

| Tool | Description |
| --- | --- |
| `metrx_get_cost_summary` | Comprehensive cost summary — total spend, call counts, error rates, and optimization opportunities |
| `metrx_list_agents` | List all agents with status, category, cost metrics, and health indicators |
| `metrx_get_agent_detail` | Detailed agent info including model, framework, cost breakdown, and performance history |

Optimization (4 tools)

| Tool | Description |
| --- | --- |
| `metrx_get_optimization_recommendations` | AI-powered cost optimization recommendations per agent or fleet-wide |
| `metrx_apply_optimization` | One-click apply an optimization recommendation to an agent |
| `metrx_route_model` | Model routing recommendation for a specific task based on complexity |
| `metrx_compare_models` | Compare LLM model pricing and capabilities across providers |

Budgets (3 tools)

| Tool | Description |
| --- | --- |
| `metrx_get_budget_status` | Current status of all budget configurations with spend vs. limits |
| `metrx_set_budget` | Create or update a budget with hard, soft, or monitor enforcement |
| `metrx_update_budget_mode` | Change enforcement mode of an existing budget or pause/resume it |

Alerts (3 tools)

| Tool | Description |
| --- | --- |
| `metrx_get_alerts` | Active alerts and notifications for your agent fleet |
| `metrx_acknowledge_alert` | Mark one or more alerts as read/acknowledged |
| `metrx_get_failure_predictions` | Predictive failure analysis — identify agents likely to fail before it happens |

Experiments (3 tools)

| Tool | Description |
| --- | --- |
| `metrx_create_model_experiment` | Start an A/B test comparing two LLM models with traffic splitting |
| `metrx_get_experiment_results` | Statistical significance, cost delta, and recommended action |
| `metrx_stop_experiment` | Stop a running model routing experiment and lock in the winner |

Cost Leak Detector (1 tool)

| Tool | Description |
| --- | --- |
| `metrx_run_cost_leak_scan` | Comprehensive 7-check cost leak audit across your entire agent fleet |

Attribution (3 tools)

| Tool | Description |
| --- | --- |
| `metrx_attribute_task` | Link agent actions to business outcomes for ROI tracking |
| `metrx_get_task_roi` | Calculate return on investment for an agent — costs vs. attributed outcomes |
| `metrx_get_attribution_report` | Multi-source attribution report with confidence scores and top contributors |

Alert Configuration (1 tool)

| Tool | Description |
| --- | --- |
| `metrx_configure_alert_threshold` | Set cost or operational alert thresholds with email, webhook, or auto-pause |

ROI Audit (1 tool)

| Tool | Description |
| --- | --- |
| `metrx_generate_roi_audit` | Board-ready ROI audit report for your AI agent fleet |

Upgrade Justification (1 tool)

| Tool | Description |
| --- | --- |
| `metrx_get_upgrade_justification` | ROI report for tier upgrades based on current usage patterns |

Prompts

Pre-built prompt templates for common workflows:

| Prompt | Description |
| --- | --- |
| `analyze-costs` | Comprehensive cost overview — spend breakdown, top agents, optimization opportunities |
| `find-savings` | Discover optimization opportunities — model downgrades, caching, routing |
| `cost-leak-scan` | Scan for waste patterns — retry storms, oversized contexts, model mismatch |

Examples

"How much am I spending?"

User: What was my AI cost this week?
→ metrx_get_cost_summary(period_days=7)

Total Spend: $234.56 | Calls: 2,450 | Error Rate: 0.2%
├── customer-support: $156.23 (1,800 calls)
└── code-generator: $78.33 (650 calls)

💡 Switch customer-support from GPT-4 to Claude Sonnet: Save $42/week

"Find me savings"

User: Am I overpaying for my agents?
→ metrx_compare_models(models=["gpt-4o", "claude-3-5-sonnet", "gemini-1.5-pro"])

Model Comparison (per 1M tokens):
├── gpt-4o: $2.50 in / $10.00 out
├── claude-3-5-sonnet: $3.00 in / $15.00 out
└── gemini-1.5-pro: $3.50 in / $10.50 out
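Those per-1M-token prices make workload math straightforward. As a quick TypeScript sketch, using the prices listed above and a made-up workload of 40M input and 8M output tokens per month:

```typescript
// Per-1M-token prices from the comparison above.
const prices = {
  "gpt-4o": { in: 2.5, out: 10.0 },
  "claude-3-5-sonnet": { in: 3.0, out: 15.0 },
  "gemini-1.5-pro": { in: 3.5, out: 10.5 },
};

// Total cost in dollars for a workload of `inputM` million input tokens
// and `outputM` million output tokens.
function workloadCost(
  model: keyof typeof prices,
  inputM: number,
  outputM: number,
): number {
  const p = prices[model];
  return inputM * p.in + outputM * p.out;
}

for (const model of Object.keys(prices) as (keyof typeof prices)[]) {
  console.log(`${model}: $${workloadCost(model, 40, 8).toFixed(2)}/month`);
}
```

At that volume, gpt-4o comes out cheapest ($180) despite Claude's lower headline reputation, which is exactly the kind of comparison `metrx_compare_models` automates.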

"Test a cheaper model"

User: Test Claude 3.5 Sonnet against my GPT-4 setup
→ metrx_create_model_experiment(agent_id="agent_123",
    model_a="gpt-4o", model_b="claude-3-5-sonnet-20241022", traffic_split=10)

Experiment started: 90% GPT-4o, 10% Claude 3.5 Sonnet
Check back in 14 days for statistical significance.

Companion Tool: Cost Leak Detector

This repo also includes @metrxbot/cost-leak-detector — a free, offline CLI that scans your LLM API logs for wasted spend. No signup, no cloud, no data leaves your machine.

npx @metrxbot/cost-leak-detector demo

It runs 7 checks (idle agents, premium model overuse, missing caching, high error rates, context overflow, no budgets, arbitrage opportunities) and gives you a scored report in seconds. See the full docs.

Configuration

API Key (required)

The server looks for your API key in this order:

  1. METRX_API_KEY environment variable
  2. ~/.metrxrc file (created by --auth)

Run npx @metrxbot/mcp-server --auth to save your key, or set the env var directly.

| Variable | Required | Description |
| --- | --- | --- |
| `METRX_API_KEY` | Yes* | Your Metrx API key (get one free) |
| `METRX_API_URL` | No | Override API base URL (default: https://metrxbot.com/api/v1) |

*Not required if you've run --auth — the key is read from ~/.metrxrc automatically.
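The lookup order can be sketched in TypeScript like this (the exact `~/.metrxrc` format is an assumption here; it is treated as the raw key on a single line, and `resolveApiKey` is an illustrative helper, not the package's actual internals):

```typescript
import { readFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

// Resolve the API key: environment variable first, then ~/.metrxrc.
function resolveApiKey(
  env: Record<string, string | undefined> = process.env,
  readFile: (path: string) => string = (p) => readFileSync(p, "utf8"),
): string | undefined {
  if (env.METRX_API_KEY) return env.METRX_API_KEY; // 1. environment variable
  try {
    return readFile(join(homedir(), ".metrxrc")).trim(); // 2. file saved by --auth
  } catch {
    return undefined; // no key configured anywhere
  }
}
```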

CLI Flags

| Flag | Description |
| --- | --- |
| `--demo` | Start with sample data — no API key or signup needed |
| `--auth` | Interactive login — opens browser, validates key, saves to ~/.metrxrc |
| `--test` | Verify your API key and connection |

Rate Limiting

60 requests per minute per tool. For higher limits, contact [email protected].
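If you drive the tools programmatically, a simple client-side throttle keeps you under that limit by spacing calls to each tool at least one second apart. This `ToolThrottle` class is a hypothetical sketch, not part of the package:

```typescript
// Client-side throttle for a 60 requests/minute/tool limit:
// enforce a minimum interval of 60_000 / 60 = 1000 ms between
// successive calls to the same tool.
class ToolThrottle {
  private last = new Map<string, number>();
  constructor(private minIntervalMs = 60_000 / 60) {}

  // Milliseconds to wait before the next call to `tool` is safe.
  delayFor(tool: string, now = Date.now()): number {
    const prev = this.last.get(tool);
    return prev === undefined ? 0 : Math.max(0, prev + this.minIntervalMs - now);
  }

  // Record that a call to `tool` was just made.
  record(tool: string, now = Date.now()): void {
    this.last.set(tool, now);
  }
}
```

Before each request, sleep for `delayFor(tool)` milliseconds, then call `record(tool)` once the request is sent.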

Development

git clone https://github.com/metrxbots/mcp-server.git
cd mcp-server
npm install
npm run typecheck
npm test

Contributing

See CONTRIBUTING.md for guidelines.

A Note on Naming

The product is Metrx (metrxbot.com). The npm scope is @metrxbot and the Smithery listing is metrxbot/mcp-server. The GitHub organization is metrxbots (with an s) because metrxbot was already taken on GitHub. If you see metrxbot vs metrxbots across platforms, they're the same project — just a GitHub namespace constraint.

License

MIT — see LICENSE.

💬 Feedback

Did Metrx work for you? We'd love to hear it — good or bad.

If you installed but hit a snag, tell us what happened — we read every report.
