# AgentDesk MCP — Adversarial AI Review

Quality control for AI pipelines — one MCP tool. Dual-reviewer consensus with anti-gaming protection. BYOK — works with Claude Code, Claude Desktop, and any MCP client.
29.5% of teams do no evaluation of AI outputs (LangChain survey). Knowledge workers spend 4.3 hours/week fact-checking AI outputs (Microsoft, 2025).
AgentDesk MCP fixes this. Add independent adversarial review to any AI pipeline in 30 seconds.
## Quick Start

### npm (recommended)

```bash
npx agentdesk-mcp
```

### Claude Code

```bash
claude mcp add agentdesk-mcp -- npx agentdesk-mcp
```

### Claude Desktop

```json
{
  "mcpServers": {
    "agentdesk-mcp": {
      "command": "npx",
      "args": ["-y", "agentdesk-mcp"],
      "env": { "ANTHROPIC_API_KEY": "sk-ant-..." }
    }
  }
}
```

### Install from GitHub (alternative)

```bash
npm install github:Rih0z/agentdesk-mcp
```
## Requirements

- `ANTHROPIC_API_KEY` environment variable (uses your own key — BYOK)
## Tools

### review_output

Adversarial quality review of any AI-generated output. An independent reviewer assumes the author made mistakes and actively looks for problems.

Input:

| Parameter | Required | Description |
|---|---|---|
| output | Yes | The AI-generated output to review |
| criteria | No | Custom review criteria |
| review_type | No | Category: code, content, factual, translation, etc. |
| model | No | Reviewer model (default: claude-sonnet-4-6) |
Output:

```json
{
  "verdict": "PASS | FAIL | CONDITIONAL_PASS",
  "score": 82,
  "issues": [
    {
      "severity": "high",
      "category": "accuracy",
      "description": "Claim about X is unsupported",
      "suggestion": "Add citation or remove claim"
    }
  ],
  "checklist": [
    {
      "item": "Factual accuracy",
      "status": "pass",
      "evidence": "All statistics match cited sources"
    }
  ],
  "summary": "Overall assessment...",
  "reviewer_model": "claude-sonnet-4-6"
}
```
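Because the result is structured JSON rather than free text, a pipeline can gate on it mechanically. A minimal sketch, assuming the schema above; the helper name and the thresholds are illustrative, not part of the tool:

```python
import json

def should_accept(review_json: str, min_score: int = 70) -> bool:
    """Decide whether to accept an AI output from a review_output result.

    Accepts a PASS or CONDITIONAL_PASS verdict when the score clears a
    threshold and no high-severity issues remain. Thresholds are illustrative.
    """
    review = json.loads(review_json)
    if review["verdict"] == "FAIL":
        return False
    has_blocker = any(i["severity"] == "high" for i in review.get("issues", []))
    return review["score"] >= min_score and not has_blocker

# Example result shaped like the schema above
result = json.dumps({
    "verdict": "CONDITIONAL_PASS",
    "score": 82,
    "issues": [{"severity": "low", "category": "style",
                "description": "Minor wording", "suggestion": "Tighten"}],
})
print(should_accept(result))  # True: non-FAIL verdict, score >= 70, no high-severity issues
```

The same gate works unchanged on `review_dual` results, since both tools return the same shape.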
### review_dual

Dual adversarial review — two independent reviewers assess the output from different angles, then a merge agent combines their findings:

- If either reviewer finds a critical issue → the merged verdict is FAIL
- The lower of the two scores is kept
- All issues are combined and deduplicated

Use it for high-stakes outputs where quality is critical. Takes the same parameters as `review_output`.
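The merge rules above can be sketched roughly as follows. This is a simplified illustration (verdicts reduced to PASS/FAIL, field names assumed from the `review_output` schema, "critical" read as high severity), not the actual merge agent:

```python
def merge_reviews(a: dict, b: dict) -> dict:
    """Combine two independent review results per the rules above (sketch).

    - FAIL if either reviewer failed the output or found a high-severity issue
    - keep the lower of the two scores
    - union the issues, deduplicating by (category, description)
    """
    issues = {(i["category"], i["description"]): i
              for i in a["issues"] + b["issues"]}
    critical = any(i["severity"] == "high" for i in issues.values())
    failed = "FAIL" in (a["verdict"], b["verdict"]) or critical
    return {
        "verdict": "FAIL" if failed else "PASS",
        "score": min(a["score"], b["score"]),
        "issues": list(issues.values()),
    }
```

Taking the lower score and failing on either reviewer's critical finding makes the merged verdict conservative: one lenient reviewer cannot rescue an output the other rejected.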
## How It Works

- Adversarial prompting: the reviewer is instructed to assume mistakes were made. No benefit of the doubt.
- Evidence-based checklist: every PASS item requires specific evidence. Items without evidence are automatically downgraded to FAIL.
- Anti-gaming validation: if more than 30% of checklist items lack evidence, the entire review is forced to FAIL with the score capped at 50.
- Structured output: verdict + numeric score + categorized issues + checklist (not just "looks good").
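The evidence and anti-gaming rules could be approximated like this. A sketch under assumed field names; only the 30% threshold and the score cap of 50 come from the description above:

```python
def validate_checklist(review: dict) -> dict:
    """Apply the evidence rules above to a review result (illustrative sketch).

    - any checklist PASS without evidence is downgraded to FAIL
    - if >30% of items lack evidence, force verdict FAIL and cap score at 50
    """
    items = review["checklist"]
    missing = 0
    for item in items:
        if not item.get("evidence", "").strip():
            missing += 1
            if item["status"] == "pass":
                item["status"] = "fail"  # no evidence, no pass
    if items and missing / len(items) > 0.30:
        review["verdict"] = "FAIL"
        review["score"] = min(review["score"], 50)
    return review
```

The point of the layer is that a reviewer (or a prompt-injected output) cannot earn a PASS simply by asserting one; each pass must be backed by checkable evidence.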
## Use Cases
- Code review: Check for bugs, security issues, performance problems
- Content review: Verify accuracy, readability, SEO, audience fit
- Factual verification: Validate claims in AI-generated text
- Translation quality: Check accuracy and naturalness
- Data extraction: Verify completeness and correctness
- Any AI output: Summaries, reports, proposals, emails, etc.
## Why Not Just Ask the Same AI to Review?
Self-review has systematic leniency bias. An LLM reviewing its own output shares the same blind spots that created the errors. Research shows models are 34% more likely to use confident language when hallucinating.
AgentDesk uses a separate reviewer invocation with adversarial prompting — fundamentally different from self-review.
## Comparison
| Feature | AgentDesk MCP | Manual prompt | Braintrust | DeepEval |
|---|---|---|---|---|
| One-tool setup | Yes | No | No | No |
| Adversarial review | Yes | DIY | No | No |
| Dual reviewer | Yes | DIY | No | No |
| Anti-gaming validation | Yes | No | No | No |
| No SDK required | Yes | Yes | No | No |
| MCP native | Yes | No | No | No |
## Limitations

- Prompt injection: like all LLM-as-judge systems, adversarial inputs could attempt to manipulate reviewer verdicts. The anti-gaming validation layer mitigates superficial gaming, but determined adversarial inputs remain a challenge. For high-stakes use cases, combine with deterministic validation.
- BYOK cost: each `review_output` call makes one LLM API call; `review_dual` makes three. Factor this into your pipeline costs.
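As a rough illustration of the cost difference: the token counts and per-million-token prices below are placeholders, not real figures (check your provider's current pricing).

```python
# Hypothetical per-call token usage and pricing; substitute real numbers.
INPUT_TOKENS, OUTPUT_TOKENS = 2_000, 800   # tokens per LLM call (assumed)
PRICE_IN, PRICE_OUT = 3.00, 15.00          # USD per million tokens (assumed)

def call_cost(calls: int) -> float:
    """Cost in USD for `calls` LLM invocations at the assumed rates."""
    per_call = (INPUT_TOKENS * PRICE_IN + OUTPUT_TOKENS * PRICE_OUT) / 1_000_000
    return calls * per_call

single = call_cost(1)  # review_output: 1 call
dual = call_cost(3)    # review_dual: 2 reviewers + merge agent
print(f"review_output ~ ${single:.4f}, review_dual ~ ${dual:.4f}")
```

Whatever the real rates, `review_dual` costs roughly three times `review_output` per invocation, so reserve it for the high-stakes outputs it is designed for.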
## Hosted API (Separate Product)
For teams that prefer HTTP integration, a hosted REST API with additional features (agent marketplace, context learning, workflows) is available at agentdesk-blue.vercel.app.
## Development

```bash
git clone https://github.com/Rih0z/agentdesk-mcp.git
cd agentdesk-mcp
npm install
npm test       # 35 tests
npm run build
```
## License
MIT
Built by EZARK Consulting | Web Version