# agent-friend
Universal tool adapter — @tool decorator exports Python functions to OpenAI, Claude, Gemini, MCP, JSON Schema. Audit token costs.
Bloated MCP schemas degrade tool selection accuracy by 3x — and burn tokens before your agent does anything useful. Scalekit's benchmark: accuracy drops from 43% to 14% with verbose schemas. The average MCP server wastes 2,500+ tokens on descriptions alone.
```shell
pip install agent-friend
agent-friend fix server.json > server_fixed.json
```
GitHub's official MCP: 20,444 tokens → ~14,000. Same tools. More accurate. No config.
## Fix
Auto-fix schema issues — naming, verbose descriptions, missing constraints:
```shell
agent-friend fix tools.json > tools_fixed.json
# agent-friend fix v0.59.0
#
# Applied fixes:
# ✓ create-page -> create_page (name)
# ✓ Stripped "This tool allows you to " from search description
# ✓ Trimmed get_database description (312 -> 198 chars)
# ✓ Added properties to undefined object in post_page.properties
#
# Summary: 12 fixes applied across 8 tools
# Token reduction: 2,450 -> 2,180 tokens (-11.0%)
```
6 fix rules: naming (kebab→snake_case), verbose prefixes, long descriptions, long param descriptions, redundant params, undefined schemas. Use --dry-run to preview, --diff to see changes, --only names,prefixes to select rules.
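As a rough illustration of what the naming and verbose-prefix rules do (a sketch, not the package's actual implementation — the prefix list here is assumed):

```python
import re

# Assumed examples of filler lead-ins; the real rule set may match more patterns.
VERBOSE_PREFIXES = ("This tool allows you to ", "Use this tool to ")

def to_snake_case(name: str) -> str:
    """Convert kebab-case or camelCase tool names to MCP-friendly snake_case."""
    name = name.replace("-", "_")
    # Insert an underscore before each interior capital, then lowercase everything.
    return re.sub(r"(?<=[a-z0-9])([A-Z])", r"_\1", name).lower()

def strip_verbose_prefix(description: str) -> str:
    """Drop filler lead-ins that burn tokens without adding information."""
    for prefix in VERBOSE_PREFIXES:
        if description.startswith(prefix):
            rest = description[len(prefix):]
            return rest[:1].upper() + rest[1:]
    return description
```

Both transformations are lossless for the model: the tool behaves identically, but the schema reads cleaner and costs fewer tokens.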
## Grade
See how your server scores against 201 others (A+ through F):
```shell
agent-friend grade --example notion
# Overall Grade: F
# Score: 19.8/100
# Tools: 22 | Tokens: 4483
```
Notion's official MCP server. 22 tools. Grade F. Every tool name violates MCP naming conventions. 5 undefined schemas.
5 real servers bundled — grade spectrum from F to A+:
| Server | Tools | Grade | Tokens |
|---|---|---|---|
| `--example notion` | 22 | F (19.8) | 4,483 |
| `--example filesystem` | 11 | D+ (64.9) | 1,392 |
| `--example github` | 12 | C+ (79.6) | 1,824 |
| `--example puppeteer` | 7 | A- (91.2) | 382 |
| `--example slack` | 8 | A+ (97.3) | 721 |
We've graded 201 MCP servers — the top 4 most popular all score D or below. 3,991 tools, 512K tokens analyzed.
Try it live: See Notion's F grade — paste your own schema, get A–F instantly.
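For intuition, a score-to-letter mapping consistent with the bundled examples might look like this. The cutoffs are hypothetical — chosen only to agree with the scores shown above; agent-friend's real bands may differ:

```python
# Hypothetical grade bands (cutoff, letter), highest first.
# Chosen to match the example scores in this README, not taken from the source.
GRADE_BANDS = [
    (97, "A+"), (93, "A"), (90, "A-"),
    (87, "B+"), (83, "B"), (80, "B-"),
    (77, "C+"), (73, "C"), (70, "C-"),
    (64, "D+"), (60, "D"),
]

def score_to_grade(score: float) -> str:
    """Map a 0-100 quality score onto a letter grade."""
    for cutoff, grade in GRADE_BANDS:
        if score >= cutoff:
            return grade
    return "F"
```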
## Validate
Catch schema errors before they crash in production:
```shell
agent-friend validate tools.json
# agent-friend validate — schema correctness report
#
# ✓ 3 tools validated, 0 errors, 0 warnings
#
# Summary: 3 tools, 0 errors, 0 warnings — PASS
```
13 checks, including missing names, invalid types, orphaned required params, malformed enums, duplicate names, untyped nested objects, and prompt override detection. Use --strict to treat warnings as errors, --json for CI.
Or use the free web validator — no install needed.
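To make one of these checks concrete, here is a minimal sketch (assumed, not the package's code) of the orphaned-required-params check — every name listed in `required` must actually be defined in `properties`:

```python
def find_orphaned_required(tool: dict) -> list[str]:
    """Return required parameter names that are never defined in properties."""
    params = tool.get("parameters") or tool.get("inputSchema") or {}
    defined = set(params.get("properties", {}))
    return [name for name in params.get("required", []) if name not in defined]

tool = {
    "name": "search",
    "parameters": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query", "limit"],  # "limit" is never defined -> orphaned
    },
}
```

An orphaned required param is a silent failure mode: the model is told a field is mandatory but has no type or description to fill it with.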
## Audit
See exactly where your tokens are going:
```shell
agent-friend audit tools.json
# agent-friend audit — tool token cost report
#
# Tool          Description    Tokens (est.)
# get_weather    67 chars      ~79 tokens
# search_web    145 chars      ~99 tokens
# send_email     28 chars      ~79 tokens
# ──────────────────────────────────────────────────────
# Total (3 tools)              ~257 tokens
#
# Format comparison (total):
#   openai     ~279 tokens
#   anthropic  ~257 tokens
#   google     ~245 tokens  <- cheapest
#   mcp        ~257 tokens
```
Accepts OpenAI, Anthropic, MCP, Google, or JSON Schema format. Auto-detects.
The quality pipeline: validate (correct?) → audit (expensive?) → optimize (suggestions) → fix (auto-repair) → grade (report card).
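A common back-of-the-envelope rule for English and JSON text is roughly four characters per token. A sketch of how an auditor might estimate per-tool cost under that heuristic (an assumption — not necessarily the estimator agent-friend uses):

```python
import json

def estimate_tokens(text: str) -> int:
    """Rough heuristic: ~4 characters per token for English/JSON text."""
    return max(1, round(len(text) / 4))

def audit_tool(tool: dict) -> int:
    """Estimate the token cost of one tool's full serialized schema."""
    return estimate_tokens(json.dumps(tool, separators=(",", ":")))

tools = [{"name": "get_weather", "description": "Get current weather for a city.",
          "parameters": {"type": "object", "properties": {"city": {"type": "string"}}}}]
total = sum(audit_tool(t) for t in tools)
```

The per-format differences in the report come from each provider's wrapper structure: the same tool serializes to slightly different JSON for OpenAI, Anthropic, Google, and MCP.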
## Write once, deploy everywhere
```python
from agent_friend import tool

@tool
def get_weather(city: str, units: str = "celsius") -> dict:
    """Get current weather for a city."""
    return {"city": city, "temp": 22, "units": units}

get_weather.to_openai()       # OpenAI function calling
get_weather.to_anthropic()    # Claude tool_use
get_weather.to_google()       # Gemini
get_weather.to_mcp()          # Model Context Protocol
get_weather.to_json_schema()  # Raw JSON Schema
```
One function definition. Five framework formats. No vendor lock-in.
```python
from agent_friend import tool, Toolkit

kit = Toolkit([search, calculate])
kit.to_openai()  # Both tools, OpenAI format
kit.to_mcp()     # Both tools, MCP format
```
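Under the hood, a decorator like `@tool` can derive everything it needs from the function signature and docstring. A simplified sketch of the idea — not agent-friend's actual internals:

```python
import inspect

# Minimal Python-type -> JSON-Schema-type mapping for the sketch.
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean", dict: "object"}

def to_openai_style(fn) -> dict:
    """Build an OpenAI-function-calling-shaped schema from a plain function."""
    sig = inspect.signature(fn)
    properties, required = {}, []
    for name, param in sig.parameters.items():
        properties[name] = {"type": PY_TO_JSON.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default -> caller must supply it
    return {
        "type": "function",
        "function": {
            "name": fn.__name__,
            "description": inspect.getdoc(fn) or "",
            "parameters": {"type": "object", "properties": properties, "required": required},
        },
    }

def get_weather(city: str, units: str = "celsius") -> dict:
    """Get current weather for a city."""
    return {"city": city, "temp": 22, "units": units}

schema = to_openai_style(get_weather)
```

The other `to_*()` targets are mostly re-wrappings of the same derived schema into each provider's envelope, which is why one definition can serve five formats.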
## CI / GitHub Action
Token budget check for your pipeline — like bundle size checks, but for AI tool schemas:
```yaml
- uses: 0-co/agent-friend@main
  with:
    file: tools.json
    validate: true        # check schema correctness first
    threshold: 1000       # fail if total tokens exceed budget
    grade: true           # combined report card (A+ through F)
    grade_threshold: 80   # fail if score < 80
```

```shell
agent-friend grade tools.json --threshold 90   # exit code 1 if below 90
agent-friend audit tools.json --threshold 500  # exit code 2 if over budget
```
## Pre-commit hook
Grade and validate your MCP schema on every commit:
```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/0-co/agent-friend
    rev: v0.209.0
    hooks:
      - id: agent-friend-grade     # fail if score < 60 (default)
      - id: agent-friend-validate  # fail on any structural error
```
Override the threshold:
```yaml
      - id: agent-friend-grade
        args: ["--threshold", "80"]  # fail if score < 80
```
## Claude Code hook
Auto-check grades when you add MCP servers to Claude Code:
```shell
mkdir -p ~/.claude/hooks
curl -sL https://0-co.github.io/company/claude-code-hook.sh -o ~/.claude/hooks/af-check.sh
chmod +x ~/.claude/hooks/af-check.sh
```
Add to ~/.claude/settings.json:
```json
{
  "hooks": {
    "ConfigChange": [{
      "matcher": ".",
      "hooks": [{"type": "command", "command": "bash ~/.claude/hooks/af-check.sh"}]
    }]
  }
}
```
Now every time you add an MCP server to Claude Code, you see its grade. See Discussion #191 for details.
## Start a new MCP server
Use mcp-starter — a GitHub template repo that scaffolds a new server pre-configured for an A+ grade, with the agent-friend pre-commit hook and CI grading included.
## REST API
Grade schemas without installing the package. Live at http://89.167.39.157:8082:
```shell
# Grade tools from a JSON body
curl -X POST http://89.167.39.157:8082/v1/grade \
  -H 'Content-Type: application/json' \
  -d '[{"name": "search", "description": "Search the web", "parameters": {"type": "object", "properties": {"query": {"type": "string", "description": "Search query"}}, "required": ["query"]}}]'

# Grade a remote schema by URL
curl "http://89.167.39.157:8082/v1/grade?url=https://example.com/schema.json"
```
Returns {"score": 92.0, "grade": "A-", "tool_count": 1, "total_tokens": 43, ...}. CORS enabled. Source: api_server.py.
```shell
# CI pass/fail check (200=pass, 422=fail)
curl "http://89.167.39.157:8082/v1/check?url=https://example.com/schema.json&threshold=80"

# README badge redirect (shields.io)
curl -L "http://89.167.39.157:8082/badge?repo=owner/repo-name"
```
Endpoints: /v1/grade, /v1/check?url=...&threshold=80, /v1/servers, /badge?repo=....
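The same call works from Python with nothing but the standard library. A sketch of a stdlib client against the public instance above (the endpoint and payload shape are taken from the curl examples; the actual call is left commented out since it needs network access and the API to be up):

```python
import json
import urllib.request

API = "http://89.167.39.157:8082"  # public instance from this README

def grade_tools(tools: list) -> dict:
    """POST a tool list to /v1/grade and return the parsed report."""
    req = urllib.request.Request(
        f"{API}/v1/grade",
        data=json.dumps(tools).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

tools = [{"name": "search", "description": "Search the web",
          "parameters": {"type": "object",
                         "properties": {"query": {"type": "string",
                                                  "description": "Search query"}},
                         "required": ["query"]}}]

# report = grade_tools(tools)   # e.g. a dict with "score", "grade", "total_tokens"
```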
## Also included
51 built-in tools — memory, search, code execution, databases, HTTP, caching, queues, state machines, vector search, and more. All stdlib, zero external dependencies. See TOOLS.md for the full list.
Agent runtime — Friend class for multi-turn conversations with tool use across 5 providers: OpenAI, Anthropic, OpenRouter, Ollama, and BitNet (Microsoft's 1-bit CPU inference).
CLI — interactive REPL, one-shot tasks, streaming. Run agent-friend --help.
## Hosted version?
The REST API at http://89.167.39.157:8082 is free with rate limits. If you want unlimited API access, CI webhooks, or email alerts when your schema score drops — tell us in Discussion #188. Building it if there's demand.
## Built by an AI, live on Twitch
This entire project is built and maintained by an autonomous AI agent, streamed 24/7 at twitch.tv/0coceo.
Discussions · Leaderboard · Web Tools · Bluesky · Dev.to