ToolRank

Score and optimize MCP tool definitions for AI agent discovery. Analyzes Findability, Clarity, Precision, and Efficiency.

The PageRank for AI agent tools.

Score, optimize, and monitor how AI agents discover and select your MCP tools.



We scanned 4,162 MCP servers. Here's what we found.

| Metric | Value |
| --- | --- |
| Registered servers | 4,162 |
| With tool definitions | 1,122 (27%) |
| Invisible to agents | 3,040 (73%) |
| Average score | 84.7/100 |
| Selection advantage | 3.6x for optimized tools |

73% of MCP servers are invisible to AI agents. They have no tool definitions, no descriptions, no schema. When an agent searches for tools, these servers don't exist.

Sources: arXiv 2602.14878, arXiv 2602.18914

What is ATO?

ATO (Agent Tool Optimization) is to the agent economy what SEO was to the search economy.

| | SEO | LLMO | ATO |
| --- | --- | --- | --- |
| Target | Search engines | LLM responses | Agent tool selection |
| Trigger | Human searches | Human asks AI | Agent acts autonomously |
| Result | A click | A mention | A transaction |

LLMO is Stage 1 of ATO — necessary but not sufficient.

Quick Start

Score in browser

toolrank.dev/score — paste your tool JSON or enter your Smithery server name.

Score via CLI

```shell
npx @toolrank/mcp-server
```
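To call the scorer from an agent, register the server with your MCP client. Many clients (e.g., Claude Desktop) accept an `mcpServers` config entry like the following — the `"toolrank"` key is an illustrative name, not a required one:

```json
{
  "mcpServers": {
    "toolrank": {
      "command": "npx",
      "args": ["@toolrank/mcp-server"]
    }
  }
}
```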

Score in Python

```python
from toolrank_score import score_server, format_report

# Illustrative tool list -- replace with your server's real MCP tool definitions
tools = [{"name": "search_repositories",
          "description": "Searches for GitHub repositories matching a query.",
          "inputSchema": {"type": "object", "properties": {"query": {"type": "string"}}}}]

result = score_server("my-server", tools)
print(format_report(result))
```

ToolRank Score

A 0-100 metric across four dimensions:

| Dimension | Weight | What it measures |
| --- | --- | --- |
| Findability | 25% | Can agents discover you? |
| Clarity | 35% | Can agents understand you? |
| Precision | 25% | Is your schema precise? |
| Efficiency | 15% | Are you token-efficient? |
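The weighting above can be sketched as a simple weighted average. This is a minimal illustration, not the actual 14-check scoring engine; the per-dimension sub-scores are assumed inputs:

```python
# Weights from the table above; each dimension sub-score is 0-100.
WEIGHTS = {
    "findability": 0.25,
    "clarity": 0.35,
    "precision": 0.25,
    "efficiency": 0.15,
}

def toolrank_score(dimensions: dict) -> float:
    """Combine per-dimension scores (0-100 each) into one 0-100 score."""
    return round(sum(WEIGHTS[d] * dimensions[d] for d in WEIGHTS), 1)

print(toolrank_score({"findability": 80, "clarity": 90,
                      "precision": 70, "efficiency": 60}))  # 78.0
```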

Maturity Levels

| Level | Score | Meaning |
| --- | --- | --- |
| Dominant | 85-100 | Agents prefer your tool |
| Preferred | 70-84 | Agents can use your tool well |
| Selectable | 50-69 | Agents might use your tool |
| Visible | 25-49 | Agents see you but rarely select |
| Absent | 0-24 | Agents can't find you |
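Mapping a score to its level is a matter of threshold checks — a sketch of the bands above:

```python
def maturity_level(score: float) -> str:
    """Map a 0-100 ToolRank Score to its maturity level (bands from the table above)."""
    if score >= 85:
        return "Dominant"
    if score >= 70:
        return "Preferred"
    if score >= 50:
        return "Selectable"
    if score >= 25:
        return "Visible"
    return "Absent"

print(maturity_level(96), maturity_level(52))  # Dominant Selectable
```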

Before and After

- "name": "get",
- "description": "gets data from the api"
+ "name": "search_repositories",
+ "description": "Searches for GitHub repositories matching a query.
+   Useful for finding open-source projects or checking if a repo exists.
+   Returns name, description, stars, language, and URL.",
+ "inputSchema": {
+   "type": "object",
+   "properties": {
+     "query": { "type": "string", "description": "Search query" },
+     "sort": { "type": "string", "enum": ["stars", "forks", "updated"] }
+   },
+   "required": ["query"]
+ }

Score: 52 → 96. Five minutes of work. 3.6x selection advantage.

Architecture

```
toolrank/
├── packages/
│   ├── scoring/           # Level A engine (Python, zero-cost)
│   │   ├── toolrank_score.py    # 14 checks across 4 dimensions
│   │   ├── level_c_score.py     # Claude AI scoring (Pro)
│   │   └── weights.json         # Auto-calibrated weights
│   ├── scanner/           # Ecosystem scanner
│   │   ├── scanner_v3.py        # Weekly full / daily diff
│   │   ├── calibrate.py         # Weight auto-adjustment
│   │   └── auto_blog.py         # Daily article generation
│   ├── web/               # Astro site (toolrank.dev)
│   ├── mcp-server/        # ToolRank MCP Server
│   └── badge-worker/      # Dynamic badge SVG (CF Workers)
└── .github/workflows/     # Automated pipelines
```

Ecosystem Rankings

Updated weekly. Full ranking →

| Rank | Server | Score |
| --- | --- | --- |
| 1 | microsoft/learn_mcp | 96.5 |
| 2 | docfork/docfork | 96.5 |
| 3 | brave | 94.7 |
| 4 | LinkupPlatform/linkup-mcp-server | 93.5 |
| 5 | smithery-ai/national-weather-service | 93.3 |

Add Badge to Your README

```markdown
[![ToolRank](https://toolrank.dev/badge/dominant.svg)](https://toolrank.dev/ranking)
```

Contributing

ToolRank is open source. The scoring logic is fully transparent and auditable.

Star this repo if you find ToolRank useful — it helps others discover it.

License

MIT


toolrank.dev · Built by @imhiroki

If SEO is about being found by search engines, ATO is about being used by AI agents.
