# AgentDesk MCP — Adversarial AI Review

Adversarial AI quality review for LLM pipelines. Dual-reviewer consensus with anti-gaming protection. BYOK — works with Claude Code, Claude Desktop, and any MCP client.
29.5% of teams do NO evaluation of AI outputs (LangChain survey), and knowledge workers spend 4.3 hours per week fact-checking AI outputs (Microsoft, 2025).

AgentDesk MCP fixes this. Add independent adversarial review to any AI pipeline in 30 seconds.
## Quick Start

### npm (recommended)

```bash
npx agentdesk-mcp
```

### Claude Code

```bash
claude mcp add agentdesk-mcp -- npx agentdesk-mcp
```

### Claude Desktop

```json
{
  "mcpServers": {
    "agentdesk-mcp": {
      "command": "npx",
      "args": ["-y", "agentdesk-mcp"],
      "env": { "ANTHROPIC_API_KEY": "sk-ant-..." }
    }
  }
}
```

### Install from GitHub (alternative)

```bash
npm install github:Rih0z/agentdesk-mcp
```
## Requirements

- `ANTHROPIC_API_KEY` environment variable (uses your own key — BYOK)
## Tools

### review_output

Adversarial quality review of any AI-generated output. An independent reviewer assumes the author made mistakes and actively looks for problems.

Input:

| Parameter | Required | Description |
|---|---|---|
| `output` | Yes | The AI-generated output to review |
| `criteria` | No | Custom review criteria |
| `review_type` | No | Category: `code`, `content`, `factual`, `translation`, etc. |
| `model` | No | Reviewer model (default: `claude-sonnet-4-6`) |
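For example, a call to `review_output` might pass arguments like the following (the output text and criteria are illustrative):

```json
{
  "output": "Paris is the capital of France, founded in 1850.",
  "review_type": "factual",
  "criteria": "Check every factual claim"
}
```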
Output:

```json
{
  "verdict": "PASS | FAIL | CONDITIONAL_PASS",
  "score": 82,
  "issues": [
    {
      "severity": "high",
      "category": "accuracy",
      "description": "Claim about X is unsupported",
      "suggestion": "Add citation or remove claim"
    }
  ],
  "checklist": [
    {
      "item": "Factual accuracy",
      "status": "pass",
      "evidence": "All statistics match cited sources"
    }
  ],
  "summary": "Overall assessment...",
  "reviewer_model": "claude-sonnet-4-6"
}
```
### review_dual

Dual adversarial review — two independent reviewers assess the output from different angles, then a merge agent combines their findings:

- If either reviewer finds a critical issue, the merged verdict is FAIL
- The merged score is the lower of the two scores
- All issues are combined and deduplicated

Use it for high-stakes outputs where quality is critical. Takes the same parameters as `review_output`.
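The merge rules can be sketched as follows. The `ReviewResult` shape mirrors the `review_output` response, but `mergeReviews` and its exact logic are an illustrative reconstruction, not the package's actual internals:

```typescript
interface Issue {
  severity: "critical" | "high" | "medium" | "low";
  category: string;
  description: string;
  suggestion?: string;
}

interface ReviewResult {
  verdict: "PASS" | "FAIL" | "CONDITIONAL_PASS";
  score: number;
  issues: Issue[];
}

// Merge two independent reviews: any critical issue (or FAIL verdict)
// forces FAIL, the lower score wins, and issues are deduplicated.
function mergeReviews(a: ReviewResult, b: ReviewResult): ReviewResult {
  const issues = [...a.issues, ...b.issues].filter(
    (issue, i, all) =>
      all.findIndex((o) => o.description === issue.description) === i
  );
  const hasCritical = issues.some((i) => i.severity === "critical");
  const verdict =
    hasCritical || a.verdict === "FAIL" || b.verdict === "FAIL"
      ? "FAIL"
      : a.verdict === "PASS" && b.verdict === "PASS"
        ? "PASS"
        : "CONDITIONAL_PASS";
  return { verdict, score: Math.min(a.score, b.score), issues };
}
```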
## How It Works

- Adversarial prompting: The reviewer is instructed to assume mistakes were made — no benefit of the doubt.
- Evidence-based checklist: Every PASS item requires specific evidence; items without evidence are automatically downgraded to FAIL.
- Anti-gaming validation: If more than 30% of checklist items lack evidence, the entire review is forced to FAIL with the score capped at 50.
- Structured output: Verdict + numeric score + categorized issues + checklist — not just "looks good".
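As a rough sketch of the anti-gaming rule described above — the `enforceEvidence` function name and exact coding are illustrative, only the >30% threshold and the score cap of 50 come from the description:

```typescript
interface ChecklistItem {
  item: string;
  status: "pass" | "fail";
  evidence?: string;
}

// If more than 30% of checklist items lack evidence, force the whole
// review to FAIL and cap the numeric score at 50.
function enforceEvidence(
  checklist: ChecklistItem[],
  verdict: string,
  score: number
): { verdict: string; score: number } {
  const missing = checklist.filter(
    (c) => !c.evidence || c.evidence.trim() === ""
  ).length;
  if (checklist.length > 0 && missing / checklist.length > 0.3) {
    return { verdict: "FAIL", score: Math.min(score, 50) };
  }
  return { verdict, score };
}
```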
## Use Cases
- Code review: Check for bugs, security issues, performance problems
- Content review: Verify accuracy, readability, SEO, audience fit
- Factual verification: Validate claims in AI-generated text
- Translation quality: Check accuracy and naturalness
- Data extraction: Verify completeness and correctness
- Any AI output: Summaries, reports, proposals, emails, etc.
## Why Not Just Ask the Same AI to Review?

Self-review has a systematic leniency bias: an LLM reviewing its own output shares the blind spots that produced the errors in the first place. Research suggests models are 34% more likely to use confident language when hallucinating.

AgentDesk uses a separate reviewer invocation with adversarial prompting — fundamentally different from self-review.
## Comparison
| Feature | AgentDesk MCP | Manual prompt | Braintrust | DeepEval |
|---|---|---|---|---|
| One-tool setup | Yes | No | No | No |
| Adversarial review | Yes | DIY | No | No |
| Dual reviewer | Yes | DIY | No | No |
| Anti-gaming validation | Yes | No | No | No |
| No SDK required | Yes | Yes | No | No |
| MCP native | Yes | No | No | No |
## Limitations

- Prompt injection: Like all LLM-as-judge systems, adversarial inputs can attempt to manipulate reviewer verdicts. The anti-gaming validation layer mitigates superficial gaming, but determined adversarial inputs remain a challenge. For high-stakes use cases, combine with deterministic validation.
- BYOK cost: Each `review_output` call makes 1 LLM API call; `review_dual` makes 3. Factor this into your pipeline costs.
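One way to combine the reviewer's verdict with deterministic validation, as suggested for high-stakes use: run cheap, rule-based checks alongside the LLM review and let any deterministic failure override a PASS. The check functions here are illustrative, not part of the package:

```typescript
// A deterministic check returns an error message, or null if it passes.
type Check = (output: string) => string | null;

// Any deterministic failure overrides the LLM verdict, even a PASS —
// injection can sway an LLM judge, but not a rule-based check.
function finalVerdict(
  llmVerdict: "PASS" | "FAIL" | "CONDITIONAL_PASS",
  output: string,
  checks: Check[]
): "PASS" | "FAIL" | "CONDITIONAL_PASS" {
  const failures = checks
    .map((c) => c(output))
    .filter((m): m is string => m !== null);
  return failures.length > 0 ? "FAIL" : llmVerdict;
}

// Example checks: a size limit and a scan for unresolved placeholders.
const maxLength: Check = (o) => (o.length > 10_000 ? "output too long" : null);
const noPlaceholders: Check = (o) =>
  /\bTODO\b/.test(o) ? "unresolved TODO" : null;
```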
## Hosted API (Separate Product)
For teams that prefer HTTP integration, a hosted REST API with additional features (agent marketplace, context learning, workflows) is available at agentdesk-blue.vercel.app.
## Development

```bash
git clone https://github.com/Rih0z/agentdesk-mcp.git
cd agentdesk-mcp
npm install
npm test   # 35 tests
npm run build
```
## License
MIT
Built by EZARK Consulting | Web Version