Stop AI Hallucinations Before They Start. Run models from OpenAI, Google, Anthropic, xAI, Perplexity, and OpenRouter in parallel. They check each other's work, debate solutions, and catch errors before you see them.
TachiBot MCP
Multi-Model AI Orchestration Platform
48 AI tools. 7 providers. One protocol.
Orchestrate Perplexity, Grok, GPT-5, Gemini, Qwen, Kimi K2.5, and MiniMax M2.1 from Claude Code, Claude Desktop, Cursor, or any MCP client.
Get Started · View Tools · Documentation
If TachiBot helps your workflow, a star goes a long way.
What's New in v2.14.7
Gemini Judge & Jury System
- `gemini_judge` — Science-backed LLM-as-a-Judge (arXiv:2411.15594). 4 modes: synthesize, evaluate, rank, resolve
- `jury` — Multi-model jury panel. Configurable jurors (grok, openai, qwen, kimi, perplexity, minimax) run in parallel; Gemini synthesizes the verdict. Based on "Replacing Judges with Juries" (Cohere, arXiv:2404.18796)
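The jury pattern is easy to picture: fan the same question out to several models concurrently, then have one model synthesize a verdict. A minimal sketch with mocked juror functions (these stand in for the real provider calls; `synthesize` is a hypothetical stand-in for the Gemini step):

```javascript
// Illustrative sketch of the jury pattern -- mocked jurors, not the
// tachibot-mcp implementation.
const jurors = {
  grok: async (q) => `grok opinion on: ${q}`,
  qwen: async (q) => `qwen opinion on: ${q}`,
  kimi: async (q) => `kimi opinion on: ${q}`,
};

// Hypothetical synthesizer standing in for Gemini.
async function synthesize(question, opinions) {
  return `Verdict on "${question}" from ${opinions.length} jurors`;
}

async function jury(question) {
  // All jurors run concurrently, mirroring the parallel panel.
  const opinions = await Promise.all(
    Object.values(jurors).map((ask) => ask(question))
  );
  return synthesize(question, opinions);
}

jury("Is this SQL migration safe?").then(console.log);
// → Verdict on "Is this SQL migration safe?" from 3 jurors
```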
Perplexity Model Fixes
- Fixed `sonar-pro` model ID (was accidentally using the lightweight `sonar`)
- `perplexity_research` now uses `sonar-deep-research` — exhaustive multi-source reports in a single call
Qwen3-Coder-Next
qwen_coder now runs on Qwen3-Coder-Next (Feb 2026) — purpose-built for agentic coding:
| | Before (qwen3-coder) | After (qwen3-coder-next) |
|---|---|---|
| Params | 480B / ~35B active | 80B / 3B active |
| Context | 131K | 262K |
| SWE-Bench | 69.6% | >70% |
| Pricing | $0.22/$0.88 per M | $0.07/$0.30 per M |
3x cheaper, 2x context, better benchmarks. Falls back to legacy 480B on provider failure.
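The fallback behavior amounts to a retry on a second model ID. A hypothetical sketch, with `callModel` standing in for the real provider client (here the primary is mocked to fail so the fallback path runs):

```javascript
// Hypothetical fallback sketch -- model IDs are from the table above,
// callModel is a mock, not the real client.
const PRIMARY = "qwen3-coder-next";
const FALLBACK = "qwen3-coder"; // legacy 480B

async function callModel(model, prompt) {
  // Mocked: primary always fails so the sketch exercises the fallback.
  if (model === PRIMARY) throw new Error("provider failure");
  return `${model}: response to "${prompt}"`;
}

async function qwenCoder(prompt) {
  try {
    return await callModel(PRIMARY, prompt);
  } catch {
    // On provider failure, retry on the legacy model.
    return await callModel(FALLBACK, prompt);
  }
}

qwenCoder("refactor this loop").then(console.log);
// → qwen3-coder: response to "refactor this loop"
```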
Kimi K2.5 Suite (4 tools)
| Tool | Capability | Highlight |
|---|---|---|
| `kimi_thinking` | Step-by-step reasoning | Agent Swarm architecture |
| `kimi_code` | Code generation & fixing | SWE-Bench 76.8% |
| `kimi_decompose` | Task decomposition | Dependency graphs, parallel subtasks |
| `kimi_long_context` | Document analysis | 256K context window |
MiniMax M2.1 (2 tools)
- `minimax_code` — SWE tasks at very low cost (72.5% SWE-Bench)
- `minimax_agent` — Agentic workflows (77.2% τ²-Bench)
Qwen Reasoning
`qwen_reason` — Heavy reasoning with Qwen3-Max-Thinking (>1T params, 98% HMMT math)
Key Features
Multi-Model Intelligence
- 48 AI Tools across 7 providers — Perplexity, Grok, GPT-5, Gemini, Qwen, Kimi, MiniMax
- Multi-Model Council — planner_maker synthesizes plans from 5+ models
- Smart Routing — Automatic model selection for optimal results
- OpenRouter Gateway — Optional single API key for all providers
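Conceptually, smart routing maps a request type to the model best suited for it. A toy sketch; the routing table here is purely illustrative, not TachiBot's actual selection policy:

```javascript
// Toy routing table -- illustrative only, not TachiBot's policy.
const ROUTES = {
  search: "perplexity",
  code: "kimi",
  reasoning: "qwen",
  default: "gemini",
};

// Pick a model for a task type, falling back to the default.
function route(taskType) {
  return ROUTES[taskType] ?? ROUTES.default;
}

console.log(route("code"));   // → kimi
console.log(route("poetry")); // → gemini
```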
Advanced Workflows
- YAML-Based Workflows — Multi-step AI processes with dependency graphs
- Prompt Engineering — 14 research-backed techniques built-in
- Verification Checkpoints — 50% / 80% / 100% with automated quality scoring
- Parallel Execution — Run multiple models simultaneously
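A workflow with a dependency graph might look like the following sketch. The field names are hypothetical, not the documented schema; see the Workflows Guide for the real format:

```yaml
# Hypothetical workflow sketch -- field names are illustrative,
# not the documented schema.
name: research-and-review
steps:
  - id: research
    tool: perplexity_research
    input: "State of WebAssembly GC in 2025"
  - id: draft
    tool: kimi_code
    depends_on: [research]
  - id: review
    tool: openai_code_review
    depends_on: [draft]   # runs only after draft completes
```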
Tool Profiles
| Profile | Tools | Best For |
|---|---|---|
| Minimal | 12 | Quick tasks, low token budget |
| Research Power | 30 | Deep investigation, multi-source |
| Code Focus | 28 | Software development, SWE tasks |
| Balanced | 38 | General-purpose, mixed workflows |
| Heavy Coding (default) | 44 | Max code tools + agentic workflows |
| Full | 50 | Everything enabled |
Developer Experience
- Claude Code — First-class support
- Claude Desktop — Full integration
- Cursor — Works seamlessly
- TypeScript — Fully typed, extensible
Quick Start
Installation
```bash
npm install -g tachibot-mcp
```
Setup
Gateway Mode (Recommended) — 2 keys, all providers:
```json
{
  "mcpServers": {
    "tachibot": {
      "command": "tachibot",
      "env": {
        "OPENROUTER_API_KEY": "sk-or-xxx",
        "PERPLEXITY_API_KEY": "pplx-xxx",
        "USE_OPENROUTER_GATEWAY": "true"
      }
    }
  }
}
```
Direct Mode — One key per provider:
```json
{
  "mcpServers": {
    "tachibot": {
      "command": "tachibot",
      "env": {
        "PERPLEXITY_API_KEY": "your-key",
        "GROK_API_KEY": "your-key",
        "OPENAI_API_KEY": "your-key",
        "GOOGLE_API_KEY": "your-key",
        "OPENROUTER_API_KEY": "your-key"
      }
    }
  }
}
```
Get keys: OpenRouter | Perplexity
See Installation Guide for detailed instructions.
Tool Ecosystem (48 Tools)
Research & Search (6)
perplexity_ask · perplexity_research · perplexity_reason · grok_search · openai_search · gemini_search
Reasoning & Planning (8)
grok_reason · openai_reason · qwen_reason · kimi_thinking · kimi_decompose · planner_maker · planner_runner · list_plans
Code Intelligence (8)
kimi_code · grok_code · grok_debug · qwen_coder · qwen_algo · qwen_competitive · minimax_code · minimax_agent
Analysis & Brainstorming (9)
gemini_analyze_text · gemini_analyze_code · gemini_brainstorm · openai_brainstorm · openai_code_review · openai_explain · grok_brainstorm · grok_architect · kimi_long_context
Meta & Orchestration (5)
think · nextThought · focus · tachi · usage_stats
Workflows (9)
workflow · workflow_start · continue_workflow · list_workflows · create_workflow · visualize_workflow · workflow_status · validate_workflow · validate_workflow_file
Prompt Engineering (3)
list_prompt_techniques · preview_prompt_technique · execute_prompt_technique
Advanced Modes (bonus)
- Challenger — Critical analysis with multi-model fact-checking
- Verifier — Multi-model consensus verification
- Scout — Hybrid intelligence gathering
Example Usage
Multi-Model Planning
```javascript
// Create a plan with the multi-model council
planner_maker({ task: "Build a REST API with auth and tests", mode: "start" })
// → Grok searches → Qwen analyzes → Kimi decomposes → GPT critiques → Gemini synthesizes

// Execute with checkpoints
planner_runner({ plan: planContent, mode: "step", stepNum: 1 })
// → Automatic verification at 50%, 80% (kimi_decompose), and 100%
```
Task Decomposition
```javascript
kimi_decompose({
  task: "Migrate monolith to microservices",
  depth: 3,
  outputFormat: "dependencies"
})
// → Structured subtasks with IDs, parallel flags, acceptance criteria
```
Code Review
```javascript
kimi_code({
  task: "review",
  code: "function processPayment(amount, card) { ... }",
  language: "typescript"
})
// → SWE-Bench 76.8% quality analysis
```
Deep Reasoning
```javascript
focus({
  query: "Design a scalable event-driven architecture",
  mode: "deep-reasoning",
  models: ["grok", "gemini", "kimi"],
  rounds: 5
})
```
Documentation
- Full Documentation
- Installation Guide
- Configuration
- Tools Reference
- Workflows Guide
- API Keys Guide
- Focus Modes
Setup Guides
Contributing
Contributions welcome! See CONTRIBUTING.md for guidelines.
Like what you see?
Star on GitHub — it helps more than you think.
AGPL-3.0 — see LICENSE for details.
Made with care by @byPawel
Multi-model AI orchestration, unified.