Integrates with Microsoft's AutoGen framework to enable sophisticated multi-agent conversations via the Model Context Protocol.
A comprehensive MCP server that provides deep integration with Microsoft's AutoGen framework v0.9+, featuring the latest capabilities including prompts, resources, advanced workflows, and enhanced agent types. This server enables sophisticated multi-agent conversations through a standardized Model Context Protocol interface.
### Available Tools

- `create_agent` - Create agents with advanced configurations
- `create_workflow` - Build complete multi-agent workflows
- `get_agent_status` - Detailed agent metrics and health monitoring
- `execute_chat` - Enhanced two-agent conversations
- `execute_group_chat` - Multi-agent group discussions
- `execute_nested_chat` - Hierarchical conversation structures
- `execute_swarm` - Swarm-based collaborative problem solving
- `execute_workflow` - Run predefined workflow templates
- `manage_agent_memory` - Handle agent learning and persistence
- `configure_teachability` - Enable/configure agent learning capabilities

### Built-in Prompts

- `autogen-workflow` - Create sophisticated multi-agent workflows with customizable parameters (`task_description`, `agent_count`, `workflow_type`)
- `code-review` - Set up collaborative code review with specialized agents (`code`, `language`, `focus_areas`)
- `research-analysis` - Deploy research teams for in-depth topic analysis (`topic`, `depth`)
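As an illustration of how these prompt parameters fit together, a prompt such as `autogen-workflow` could be rendered by substituting its parameters into a template. The template text and helper below are hypothetical, not the server's actual prompt:

```python
# Hypothetical template for the autogen-workflow prompt; the server's
# real prompt text may differ.
AUTOGEN_WORKFLOW_TEMPLATE = (
    "Design a {workflow_type} workflow using {agent_count} agents "
    "for the following task: {task_description}"
)

def render_prompt(template: str, **params: str) -> str:
    """Fill a prompt template with the supplied parameters."""
    return template.format(**params)

prompt = render_prompt(
    AUTOGEN_WORKFLOW_TEMPLATE,
    task_description="summarize a research paper",
    agent_count="3",
    workflow_type="sequential",
)
print(prompt)
```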
### Resources

- `autogen://agents/list` - Live list of active agents with status and capabilities
- `autogen://workflows/templates` - Available workflow templates and configurations
- `autogen://chat/history` - Recent conversation history and interaction logs
- `autogen://config/current` - Current server configuration and settings
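The `autogen://` URIs above follow a simple category/item pattern. A sketch of how a client might split such a URI before requesting the resource (the helper name is illustrative):

```python
from urllib.parse import urlparse

def parse_autogen_uri(uri: str) -> tuple[str, str]:
    """Split an autogen:// resource URI into (category, item)."""
    parsed = urlparse(uri)
    if parsed.scheme != "autogen":
        raise ValueError(f"not an autogen resource URI: {uri}")
    # netloc holds the category ("agents"), path holds the item ("/list")
    return parsed.netloc, parsed.path.lstrip("/")

category, item = parse_autogen_uri("autogen://agents/list")
print(category, item)  # agents list
```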
To install AutoGen Server for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @DynamicEndpoints/autogen_mcp --client claude
git clone https://github.com/yourusername/autogen-mcp.git
cd autogen-mcp
npm install
pip install -r requirements.txt --user
npm run build
cp .env.example .env
cp config.json.example config.json
# Edit .env and config.json with your settings
Create a `.env` file from the template:
# Required
OPENAI_API_KEY=your-openai-api-key-here
# Optional - Path to configuration file
AUTOGEN_MCP_CONFIG=config.json
# Enhanced Features
ENABLE_PROMPTS=true
ENABLE_RESOURCES=true
ENABLE_WORKFLOWS=true
ENABLE_TEACHABILITY=true
# Performance Settings
MAX_CHAT_TURNS=10
DEFAULT_OUTPUT_FORMAT=json
Update `config.json` with your preferences:
{
"llm_config": {
"config_list": [
{
"model": "gpt-4o",
"api_key": "your-openai-api-key"
}
],
"temperature": 0.7
},
"enhanced_features": {
"prompts": { "enabled": true },
"resources": { "enabled": true },
"workflows": { "enabled": true }
}
}
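Before starting the server, it can help to sanity-check `config.json` for the fields shown above. A minimal validator sketch (not part of the server's API):

```python
import json

def validate_config(config: dict) -> list[str]:
    """Return a list of problems found in a config dict (empty if OK)."""
    problems = []
    llm = config.get("llm_config", {})
    config_list = llm.get("config_list", [])
    if not config_list:
        problems.append("llm_config.config_list is empty")
    for i, entry in enumerate(config_list):
        if not entry.get("model"):
            problems.append(f"config_list[{i}] is missing 'model'")
        if not entry.get("api_key"):
            problems.append(f"config_list[{i}] is missing 'api_key'")
    temp = llm.get("temperature", 0.7)
    if not (0.0 <= temp <= 2.0):
        problems.append("temperature should be between 0 and 2")
    return problems

config = json.loads('{"llm_config": {"config_list": [{"model": "gpt-4o"}], "temperature": 0.7}}')
print(validate_config(config))  # ["config_list[0] is missing 'api_key'"]
```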
Add to your `claude_desktop_config.json`:
{
"mcpServers": {
"autogen": {
"command": "node",
"args": ["path/to/autogen-mcp/build/index.js"],
"env": {
"OPENAI_API_KEY": "your-key-here"
}
}
}
}
Test the server functionality:
# Run comprehensive tests
python test_server.py
# Test CLI interface
python cli_example.py create_agent "researcher" "assistant" "You are a research specialist"
python cli_example.py execute_workflow "code_generation" '{"task":"Hello world","language":"python"}'
The server provides several built-in prompts (`autogen-workflow`, `code-review`, `research-analysis`) and resources with real-time data:

- `autogen://agents/list` - Current active agents
- `autogen://workflows/templates` - Available workflow templates
- `autogen://chat/history` - Recent conversation history
- `autogen://config/current` - Server configuration

An example `execute_workflow` request for code generation:

{
"workflow_name": "code_generation",
"input_data": {
"task": "Create a REST API endpoint",
"language": "python",
"requirements": ["FastAPI", "Pydantic", "Error handling"]
},
"quality_checks": true
}
A research-analysis request:

{
"workflow_name": "research",
"input_data": {
"topic": "AI Ethics in 2025",
"depth": "comprehensive"
},
"output_format": "markdown"
}
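Requests like the two above share a common shape: a `workflow_name` plus `input_data`. A sketch of how a server might dispatch them to handlers (the handler bodies here are placeholders, not the real workflow logic):

```python
def run_code_generation(input_data: dict) -> str:
    # Placeholder for the real multi-agent code-generation workflow.
    return f"generated code for: {input_data['task']}"

def run_research(input_data: dict) -> str:
    # Placeholder for the real research workflow.
    return f"research report on: {input_data['topic']}"

WORKFLOWS = {
    "code_generation": run_code_generation,
    "research": run_research,
}

def execute_workflow(request: dict) -> str:
    """Route a workflow request to the matching handler."""
    name = request["workflow_name"]
    handler = WORKFLOWS.get(name)
    if handler is None:
        raise ValueError(f"unknown workflow: {name}")
    return handler(request["input_data"])

print(execute_workflow({
    "workflow_name": "research",
    "input_data": {"topic": "AI Ethics in 2025", "depth": "comprehensive"},
}))  # research report on: AI Ethics in 2025
```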
pip install -r requirements.txt --user
npm install
Enable detailed logging:
export LOG_LEVEL=DEBUG
python test_server.py
Use `gpt-4o-mini` for faster, cost-effective operations.

# Full test suite
python test_server.py
# Individual workflow tests
python -c "
import asyncio
from src.autogen_mcp.workflows import WorkflowManager
wm = WorkflowManager()
print(asyncio.run(wm.execute_workflow('code_generation', {'task': 'test'})))
"
npm run build
npm run lint
For issues and questions, check the examples in `test_server.py`.
MIT License - see LICENSE file for details.
Set the required environment variable:

OPENAI_API_KEY=your-openai-api-key
### Server Configuration
1. Copy `config.json.example` to `config.json`:
```bash
cp config.json.example config.json
```
2. Edit `config.json` with your settings:
{
"llm_config": {
"config_list": [
{
"model": "gpt-4",
"api_key": "your-openai-api-key"
}
],
"temperature": 0
},
"code_execution_config": {
"work_dir": "workspace",
"use_docker": false
}
}
The server supports three main operations: creating agents, running two-agent chats, and running group chats. Example requests:
{
"name": "create_agent",
"arguments": {
"name": "tech_lead",
"type": "assistant",
"system_message": "You are a technical lead with expertise in software architecture and design patterns."
}
}
{
"name": "execute_chat",
"arguments": {
"initiator": "agent1",
"responder": "agent2",
"message": "Let's discuss the system architecture."
}
}
{
"name": "execute_group_chat",
"arguments": {
"agents": ["agent1", "agent2", "agent3"],
"message": "Let's review the proposed solution."
}
}
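A group chat also needs a speaker-selection policy deciding who talks next. A minimal round-robin sketch, independent of AutoGen's actual implementation:

```python
from itertools import cycle

def round_robin_turns(agents: list[str], max_turns: int) -> list[str]:
    """Return speakers in fixed rotation until max_turns is reached."""
    speakers = cycle(agents)
    return [next(speakers) for _ in range(max_turns)]

turns = round_robin_turns(["agent1", "agent2", "agent3"], max_turns=5)
print(turns)  # ['agent1', 'agent2', 'agent3', 'agent1', 'agent2']
```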
Common error scenarios include:
{
"error": "Agent already exists"
}
{
"error": "Agent not found"
}
{
"error": "AUTOGEN_MCP_CONFIG environment variable not set"
}
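Error responses can be produced uniformly. A sketch of a helper returning the `{"error": "..."}` shape shown above (the helper and lookup function are illustrative, not the server's actual code):

```python
import json

def error_response(message: str) -> str:
    """Serialize an error in the server's {"error": "..."} shape."""
    return json.dumps({"error": message})

def get_agent(registry: dict, name: str) -> str:
    """Look up an agent, returning an error payload when it is missing."""
    if name not in registry:
        return error_response("Agent not found")
    return json.dumps({"agent": name})

print(get_agent({}, "tech_lead"))  # {"error": "Agent not found"}
```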
The server follows a modular architecture:
src/
└── autogen_mcp/
    ├── __init__.py
    ├── agents.py      # Agent management and configuration
    ├── config.py      # Configuration handling and validation
    ├── server.py      # MCP server implementation
    └── workflows.py   # Conversation workflow management