Enhanced AutoGen MCP Server
A comprehensive MCP server that provides deep integration with Microsoft's AutoGen framework v0.9+, featuring the latest capabilities including prompts, resources, advanced workflows, and enhanced agent types. This server enables sophisticated multi-agent conversations through a standardized Model Context Protocol interface.
Latest Features (v0.2.0)
Enhanced MCP Support
- Prompts: Pre-built templates for common workflows (code review, research, creative writing)
- Resources: Real-time access to agent status, chat history, and configurations
- Dynamic Content: Template-based prompts with arguments and embedded resources
- Latest MCP SDK: Version 1.12.3 with full feature support
Advanced Agent Types
- Assistant Agents: Enhanced with latest LLM capabilities
- Conversable Agents: Flexible conversation patterns
- Teachable Agents: Learning and memory persistence
- Retrievable Agents: Knowledge base integration
- Multimodal Agents: Image and document processing (when available)
Sophisticated Workflows
- Code Generation: Architect → Developer → Reviewer → Executor pipeline
- Research Analysis: Researcher → Analyst → Critic → Synthesizer workflow
- Creative Writing: Multi-stage creative collaboration
- Problem Solving: Structured approach to complex problems
- Code Review: Security → Performance → Style review teams
- Custom Workflows: Build your own agent collaboration patterns
Enhanced Chat Capabilities
- Smart Speaker Selection: Auto, manual, random, round-robin modes
- Nested Conversations: Hierarchical agent interactions
- Swarm Intelligence: Coordinated multi-agent problem solving
- Memory Management: Persistent agent knowledge and preferences
- Quality Checks: Built-in validation and improvement loops
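As a rough illustration of the speaker-selection modes listed above, the sketch below implements only the round-robin and random strategies; a real "auto" mode would ask the LLM to pick the next speaker and is beyond this example. The function name and shape are illustrative, not the server's actual API.

```python
import random
from itertools import cycle

def make_speaker_selector(mode, agents):
    """Return a function that picks the next speaker under the given mode.

    Only "round_robin" and "random" are sketched here; "auto" and
    "manual" modes involve the LLM or a human and are omitted.
    """
    if mode == "round_robin":
        order = cycle(agents)
        return lambda: next(order)
    if mode == "random":
        return lambda: random.choice(agents)
    raise ValueError(f"unknown selection mode: {mode}")

select = make_speaker_selector("round_robin", ["architect", "developer", "reviewer"])
print([select() for _ in range(4)])  # wraps back to the first agent
```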
Available Tools
Core Agent Management
- create_agent - Create agents with advanced configurations
- create_workflow - Build complete multi-agent workflows
- get_agent_status - Detailed agent metrics and health monitoring
Conversation Execution
- execute_chat - Enhanced two-agent conversations
- execute_group_chat - Multi-agent group discussions
- execute_nested_chat - Hierarchical conversation structures
- execute_swarm - Swarm-based collaborative problem solving
Workflow Orchestration
- execute_workflow - Run predefined workflow templates
- manage_agent_memory - Handle agent learning and persistence
- configure_teachability - Enable/configure agent learning capabilities
Available Prompts
autogen-workflow
Create sophisticated multi-agent workflows with customizable parameters:
- Arguments: task_description, agent_count, workflow_type
- Use case: Rapid workflow prototyping and deployment
code-review
Set up collaborative code review with specialized agents:
- Arguments: code, language, focus_areas
- Use case: Comprehensive code quality assessment
research-analysis
Deploy research teams for in-depth topic analysis:
- Arguments: topic, depth
- Use case: Academic research, market analysis, technical investigation
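The prompts above are template-based: the client supplies arguments and the server expands them into full instructions. The snippet below is a minimal sketch of that expansion; the template wording is invented for illustration and will differ from the server's actual prompts.

```python
# Hypothetical templates mirroring the prompts listed above; the real
# server's wording is not documented here.
PROMPT_TEMPLATES = {
    "code-review": (
        "Review the following {language} code, focusing on {focus_areas}:\n"
        "{code}"
    ),
    "research-analysis": (
        "Research the topic '{topic}' at {depth} depth and summarize findings."
    ),
}

def render_prompt(name, arguments):
    """Fill a named prompt template with caller-supplied arguments."""
    template = PROMPT_TEMPLATES[name]
    return template.format(**arguments)

print(render_prompt("research-analysis",
                    {"topic": "AI Ethics", "depth": "comprehensive"}))
```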
Available Resources
autogen://agents/list
Live list of active agents with status and capabilities
autogen://workflows/templates
Available workflow templates and configurations
autogen://chat/history
Recent conversation history and interaction logs
autogen://config/current
Current server configuration and settings
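Internally, resource reads amount to routing an autogen:// URI to the matching piece of live server state. The dispatcher below is a simplified stand-in for that logic, with `state` faking the server's in-memory data; it is not the server's actual implementation.

```python
def read_resource(uri, state):
    """Map an autogen:// resource URI to its data, as the server might.

    `state` is a plain dict standing in for live server state.
    """
    routes = {
        "autogen://agents/list": lambda: state["agents"],
        "autogen://workflows/templates": lambda: state["templates"],
        "autogen://chat/history": lambda: state["history"],
        "autogen://config/current": lambda: state["config"],
    }
    handler = routes.get(uri)
    if handler is None:
        raise KeyError(f"unknown resource: {uri}")
    return handler()

state = {"agents": ["researcher"], "templates": ["code_generation"],
         "history": [], "config": {"model": "gpt-4o"}}
print(read_resource("autogen://agents/list", state))  # ['researcher']
```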
Installation
Installing via Smithery
To install AutoGen Server for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @DynamicEndpoints/autogen_mcp --client claude
Manual Installation
- Clone the repository:
git clone https://github.com/yourusername/autogen-mcp.git
cd autogen-mcp
- Install Node.js dependencies:
npm install
- Install Python dependencies:
pip install -r requirements.txt --user
- Build the TypeScript project:
npm run build
- Set up configuration:
cp .env.example .env
cp config.json.example config.json
# Edit .env and config.json with your settings
Configuration
Environment Variables
Create a .env file from the template:
# Required
OPENAI_API_KEY=your-openai-api-key-here
# Optional - Path to configuration file
AUTOGEN_MCP_CONFIG=config.json
# Enhanced Features
ENABLE_PROMPTS=true
ENABLE_RESOURCES=true
ENABLE_WORKFLOWS=true
ENABLE_TEACHABILITY=true
# Performance Settings
MAX_CHAT_TURNS=10
DEFAULT_OUTPUT_FORMAT=json
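How the server interprets these variables is not specified; a plausible loader, assuming the flags default to enabled and MAX_CHAT_TURNS is an integer, might look like this. The function name is illustrative.

```python
import os

def load_feature_flags(env=os.environ):
    """Parse the feature toggles from .env, defaulting to enabled,
    and coerce MAX_CHAT_TURNS to an int. Defaults are assumptions."""
    truthy = {"1", "true", "yes", "on"}
    return {
        "prompts": env.get("ENABLE_PROMPTS", "true").lower() in truthy,
        "resources": env.get("ENABLE_RESOURCES", "true").lower() in truthy,
        "workflows": env.get("ENABLE_WORKFLOWS", "true").lower() in truthy,
        "teachability": env.get("ENABLE_TEACHABILITY", "true").lower() in truthy,
        "max_chat_turns": int(env.get("MAX_CHAT_TURNS", "10")),
    }

print(load_feature_flags({"ENABLE_PROMPTS": "false", "MAX_CHAT_TURNS": "5"}))
```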
Configuration File
Update config.json with your preferences:
{
"llm_config": {
"config_list": [
{
"model": "gpt-4o",
"api_key": "your-openai-api-key"
}
],
"temperature": 0.7
},
"enhanced_features": {
"prompts": { "enabled": true },
"resources": { "enabled": true },
"workflows": { "enabled": true }
}
}
Usage Examples
Using with Claude Desktop
Add to your claude_desktop_config.json:
{
"mcpServers": {
"autogen": {
"command": "node",
"args": ["path/to/autogen-mcp/build/index.js"],
"env": {
"OPENAI_API_KEY": "your-key-here"
}
}
}
}
Command Line Testing
Test the server functionality:
# Run comprehensive tests
python test_server.py
# Test CLI interface
python cli_example.py create_agent "researcher" "assistant" "You are a research specialist"
python cli_example.py execute_workflow "code_generation" '{"task":"Hello world","language":"python"}'
Using Prompts
The server provides several built-in prompts:
- autogen-workflow - Create multi-agent workflows
- code-review - Set up collaborative code review
- research-analysis - Deploy research teams
Accessing Resources
Available resources provide real-time data:
- autogen://agents/list - Current active agents
- autogen://workflows/templates - Available workflow templates
- autogen://chat/history - Recent conversation history
- autogen://config/current - Server configuration
Workflow Examples
Code Generation Workflow
{
"workflow_name": "code_generation",
"input_data": {
"task": "Create a REST API endpoint",
"language": "python",
"requirements": ["FastAPI", "Pydantic", "Error handling"]
},
"quality_checks": true
}
Research Workflow
{
"workflow_name": "research",
"input_data": {
"topic": "AI Ethics in 2025",
"depth": "comprehensive"
},
"output_format": "markdown"
}
Advanced Features
Agent Types
- Assistant Agents: LLM-powered conversational agents
- User Proxy Agents: Code execution and human interaction
- Conversable Agents: Flexible conversation patterns
- Teachable Agents: Learning and memory persistence (when available)
- Retrievable Agents: Knowledge base integration (when available)
Chat Modes
- Two-Agent Chat: Direct conversation between agents
- Group Chat: Multi-agent discussions with smart speaker selection
- Nested Chat: Hierarchical conversation structures
- Swarm Intelligence: Coordinated problem solving (experimental)
Memory Management
- Persistent agent memory across sessions
- Conversation history tracking
- Learning from interactions (teachable agents)
- Memory cleanup and optimization
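A minimal sketch of per-agent persistent memory, assuming a simple JSON file as the backing store; the server's real storage format and class names are not documented here.

```python
import json
from pathlib import Path

class AgentMemory:
    """Tiny per-agent key-value memory persisted as JSON.

    Illustrative only; the real server's persistence layer may differ.
    """
    def __init__(self, path):
        self.path = Path(path)
        self.data = json.loads(self.path.read_text()) if self.path.exists() else {}

    def remember(self, agent, key, value):
        # Update the agent's namespace and flush to disk immediately.
        self.data.setdefault(agent, {})[key] = value
        self.path.write_text(json.dumps(self.data, indent=2))

    def recall(self, agent, key, default=None):
        return self.data.get(agent, {}).get(key, default)

memory = AgentMemory("agent_memory.json")
memory.remember("researcher", "preferred_style", "concise")
print(memory.recall("researcher", "preferred_style"))  # concise
```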
Troubleshooting
Common Issues
- API Key Errors: Ensure your OpenAI API key is valid and has sufficient credits
- Import Errors: Install all dependencies with pip install -r requirements.txt --user
- Build Failures: Check your Node.js version (>= 18) and run npm install
- Chat Failures: Verify agent creation succeeded before attempting conversations
Debug Mode
Enable detailed logging:
export LOG_LEVEL=DEBUG
python test_server.py
Performance Tips
- Use gpt-4o-mini for faster, cost-effective operations
- Enable caching for repeated operations
- Set appropriate timeout values for long-running workflows
- Use quality checks only when needed (increases execution time)
Development
Running Tests
# Full test suite
python test_server.py
# Individual workflow tests
python -c "
import asyncio
from src.autogen_mcp.workflows import WorkflowManager
wm = WorkflowManager()
print(asyncio.run(wm.execute_workflow('code_generation', {'task': 'test'})))
"
Building
npm run build
npm run lint
Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for new functionality
- Submit a pull request
Version History
v0.2.0 (Latest)
- Enhanced MCP support with prompts and resources
- Advanced agent types (teachable, retrievable)
- Sophisticated workflows with quality checks
- Smart speaker selection and nested conversations
- Real-time resource monitoring
- Memory management and persistence
v0.1.0
- Basic AutoGen integration
- Simple agent creation and chat execution
- MCP tool interface
Support
For issues and questions:
- Check the troubleshooting section above
- Review the test examples in test_server.py
- Open an issue on GitHub with detailed reproduction steps
License
MIT License - see LICENSE file for details.
Server Configuration
- Copy config.json.example to config.json:
cp config.json.example config.json
- Configure the server settings:
{
"llm_config": {
"config_list": [
{
"model": "gpt-4",
"api_key": "your-openai-api-key"
}
],
"temperature": 0
},
"code_execution_config": {
"work_dir": "workspace",
"use_docker": false
}
}
Available Operations
The server supports three main operations:
1. Creating Agents
{
"name": "create_agent",
"arguments": {
"name": "tech_lead",
"type": "assistant",
"system_message": "You are a technical lead with expertise in software architecture and design patterns."
}
}
2. One-on-One Chat
{
"name": "execute_chat",
"arguments": {
"initiator": "agent1",
"responder": "agent2",
"message": "Let's discuss the system architecture."
}
}
3. Group Chat
{
"name": "execute_group_chat",
"arguments": {
"agents": ["agent1", "agent2", "agent3"],
"message": "Let's review the proposed solution."
}
}
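Conceptually, the server routes each request payload like those above to a handler and returns either a result or one of the error objects listed under Error Handling. The dispatcher below is a simplified stand-in; the handler bodies and return shapes are illustrative, not the server's actual code.

```python
def handle_request(request, registry):
    """Route a tool request (name + arguments) to a simplified handler.

    `registry` holds created agents; real handlers would invoke AutoGen.
    """
    name, args = request["name"], request["arguments"]
    if name == "create_agent":
        if args["name"] in registry:
            return {"error": "Agent already exists"}
        registry[args["name"]] = {"type": args["type"],
                                  "system_message": args.get("system_message", "")}
        return {"result": f"created {args['name']}"}
    if name in ("execute_chat", "execute_group_chat"):
        # Group chats pass an agent list; two-agent chats pass a pair.
        participants = args.get("agents") or [args["initiator"], args["responder"]]
        if any(a not in registry for a in participants):
            return {"error": "Agent not found"}
        return {"result": f"chat started with {participants}"}
    return {"error": f"unknown tool: {name}"}

registry = {}
print(handle_request({"name": "create_agent",
                      "arguments": {"name": "tech_lead", "type": "assistant"}},
                     registry))
print(handle_request({"name": "execute_chat",
                      "arguments": {"initiator": "tech_lead", "responder": "dev"}},
                     registry))  # "Agent not found": dev was never created
```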
Error Handling
Common error scenarios include:
- Agent Creation Errors
{
"error": "Agent already exists"
}
- Execution Errors
{
"error": "Agent not found"
}
- Configuration Errors
{
"error": "AUTOGEN_MCP_CONFIG environment variable not set"
}
Architecture
The server follows a modular architecture:
src/
├── autogen_mcp/
│   ├── __init__.py
│   ├── agents.py     # Agent management and configuration
│   ├── config.py     # Configuration handling and validation
│   ├── server.py     # MCP server implementation
│   └── workflows.py  # Conversation workflow management