What is Leeroopedia?
Your ML & AI Knowledge Wiki. Learnt by AI, built by AI, for AI. Expert-level knowledge across the full ML & AI stack: fine-tuning and distributed training, inference serving and GPU kernel optimization, building agents and RAG pipelines. 1000+ frameworks and libraries, all in one place. This MCP server turns your AI coding agent (Claude Code, Cursor, Claude Desktop, ChatGPT, OpenAI Codex, …) into an ML/AI expert engineer. Browse the full knowledge base at leeroopedia.com.

Want to go end-to-end?
Leeroopedia gives your agent the knowledge. Kapso gives it the ability to act on it: research, experiment, and deploy. Together, they form a complete ML/AI engineer agent.

Connect to Your Agents
Use our hosted server for zero setup. Just paste this URL into any MCP client that supports remote servers.

Claude Code
Set up with Claude Code
Cursor
Set up with Cursor
Claude Desktop
Set up with Claude Desktop
OpenAI Codex
Set up with OpenAI Codex
ChatGPT
Set up with ChatGPT
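For clients that read a JSON configuration file, a remote-server entry typically looks like the sketch below (shown for Cursor's `.cursor/mcp.json`; Claude Desktop uses a similar `mcpServers` layout). The server name and URL here are placeholders, not the actual Leeroopedia endpoint — substitute the hosted server URL from the setup links above.

```json
{
  "mcpServers": {
    "leeroopedia": {
      "url": "https://<leeroopedia-mcp-endpoint>/mcp"
    }
  }
}
```

After saving the file, restart or reload the client so it picks up the new server and exposes its tools to the agent.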
Benchmarks
We measured the effect of Leeroopedia MCP on real ML tasks (each result is reported with vs without Leeroopedia MCP):

- ML Inference Optimization. Write CUDA/Triton kernels for 10 KernelBench problems. 2.11x vs 1.80x geomean speedup (+17%).
- LLM Post-Training. End-to-end SFT + DPO + LoRA merge + vLLM serving + IFEval on 8×A100. 21.3 vs 18.5 IFEval strict-prompt accuracy, 34.6 vs 30.9 strict-instruction accuracy, 272.7 vs 231.6 throughput.
- Self-Evolving RAG. Build a RAG service that automatically improves itself over multiple rounds. 45.16 vs 40.51 Precision@5, 40.32 vs 35.29 Recall@5, in 52 vs 62 min wall time.
- Customer Support Agent. Multi-agent triage system classifying 200 tickets into 27 intents. 98 vs 83 benchmark performance, 11s vs 61s per query.
See Full Benchmark Results
Detailed results, analysis, and replication instructions for all 4 benchmarks
Available Tools
The server provides 8 agentic tools: search, plan, review, verify, diagnose, hypothesize, query hyperparameters, and retrieve pages.

Tools Overview
See all 8 tools with parameters and usage