Leeroopedia
The Brain that turns Generalist Agents into ML Experts.
Documentation Index
Fetch the complete documentation index at: https://docs.leeroopedia.com/llms.txt Use this file to discover all available pages before exploring further.
Leeroopedia MCP
Give your AI coding agent access to curated ML/AI knowledge
$20 free credit on sign-up. That's plenty of searches, plans, and diagnoses. Skip the guesswork on your next fine-tuning run or inference deployment. No credit card required. Get your API key →
What is Leeroopedia?
Your ML & AI Knowledge Wiki. Learnt by AI, built by AI, for AI.
Expert-level knowledge across the full ML & AI stack: fine-tuning and distributed training, inference serving and GPU kernel optimization, building agents and RAG pipelines. 1000+ frameworks and libraries, all in one place.
This MCP server turns your AI coding agent (Claude Code, Cursor, Claude Desktop, ChatGPT, OpenAI Codex, ...) into an ML/AI expert engineer.
Browse the full knowledge base at leeroopedia.com.
Want to go end-to-end?
Leeroopedia gives your agent the knowledge. Kapso gives it the ability to act on it: research, experiment, and deploy. Together: a complete ML/AI engineer agent.
Connect to Your Agents
Use our hosted server for zero setup. Just paste this URL into any MCP client that supports remote servers:
https://mcp.leeroopedia.com/mcp?token=kpsk_your_key_here
Or see the per-client guides below for detailed instructions (including local setup).
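For clients that accept remote MCP servers through a JSON config file, the entry typically looks like the sketch below. This is a common shape, not a guarantee: the exact file location and key names vary by client, `leeroopedia` is an arbitrary label, and `kpsk_your_key_here` is the placeholder from above that you replace with your own API key.

```json
{
  "mcpServers": {
    "leeroopedia": {
      "url": "https://mcp.leeroopedia.com/mcp?token=kpsk_your_key_here"
    }
  }
}
```

If your client only supports command-based (local) MCP servers rather than remote URLs, follow the per-client guide below instead.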
Set up with Claude Code · Set up with Cursor · Set up with Claude Desktop · Set up with OpenAI Codex · Set up with ChatGPT
Benchmarks
We measured the effect of Leeroopedia MCP on real ML tasks:
- ML Inference Optimization. Writing CUDA/Triton kernels for 10 KernelBench problems: 2.11x geomean speedup with Leeroopedia MCP vs 1.80x without (+17%).
- LLM Post-Training. End-to-end SFT + DPO + LoRA merge + vLLM serving + IFEval on 8×A100: 21.3 vs 18.5 IFEval strict-prompt accuracy, 34.6 vs 30.9 strict-instruction accuracy, 272.7 vs 231.6 throughput.
- Self-Evolving RAG. Building a RAG service that automatically improves itself over multiple rounds: 45.16 vs 40.51 Precision@5, 40.32 vs 35.29 Recall@5, in 52 vs 62 min wall time.
- Customer Support Agent. A multi-agent triage system classifying 200 tickets into 27 intents: 98 vs 83 benchmark performance, 11s vs 61s per query.
Available Tools
The server provides 8 agentic tools: search, plan, review, verify, diagnose, hypothesize, query hyperparameters, and retrieve pages.
See all 8 tools with parameters and usage
Quick Links
Connect in 2 minutes · See the results · All 8 tools explained
Related Servers
Alpha Vantage MCP Server
Sponsored · Access financial market data: realtime & historical stock, ETF, options, forex, crypto, commodities, fundamentals, technical indicators, & more
Chart
A Model Context Protocol server for generating visual charts using AntV.
MCP Toolbox
A toolkit for enhancing LLM capabilities by providing tools to interact with external services and APIs via the Model Context Protocol (MCP).
Kafka MCP
A natural language interface to manage Apache Kafka operations.
MCP Proxy Hub
Aggregates multiple MCP resource servers into a single interface using a JSON configuration file.
Bifrost
Exposes VSCode's development tools and language features to AI tools through an MCP server.
MCP QEMU VM Control
Give your AI full computer access — safely. Let Claude (or any MCP-compatible LLM) see your screen, move the mouse, type on the keyboard, and run commands — all inside an isolated QEMU virtual machine. Perfect for AI-driven automation, testing, and computer-use experiments without risking your host system.
Substrate MCP Server
A Model Context Protocol (MCP) server for Substrate blockchains, written in Rust.
@4da/mcp-server
Dependency intelligence for AI agents. CVE scanning, health checks, upgrade planning.
OpenOcean Finance
An MCP server for executing token swaps across multiple decentralized exchanges using OpenOcean's aggregation API
agency-mcp-server
On-demand access to 150+ specialist AI agent templates — search, browse, and spawn agents. 150x reduction in context usage vs loading agents locally.