A gateway server that bridges the Model Context Protocol (MCP) with the Agent-to-Agent (A2A) protocol, enabling MCP-compatible AI assistants (like Claude) to seamlessly interact with A2A agents.
This project serves as an integration layer between two cutting-edge AI agent protocols:
Model Context Protocol (MCP): Developed by Anthropic, MCP allows AI assistants to connect to external tools and data sources. It standardizes how AI applications and large language models connect to external resources in a secure, composable way.
Agent-to-Agent Protocol (A2A): Developed by Google, A2A enables communication and interoperability between different AI agents through a standardized JSON-RPC interface.
By bridging these protocols, this server allows MCP clients (like Claude) to discover, register, communicate with, and manage tasks on A2A agents through a unified interface.
🎉 The package is now available on PyPI!
```shell
# Run with default settings (stdio transport)
uvx mcp-a2a-gateway

# Run with HTTP transport for web clients
MCP_TRANSPORT=streamable-http MCP_PORT=10000 uvx mcp-a2a-gateway

# Run with a custom data directory
MCP_DATA_DIR="/Users/your-username/Desktop/a2a_data" uvx mcp-a2a-gateway

# Run a specific version
uvx mcp-a2a-gateway==0.1.6

# Run with multiple environment variables
MCP_TRANSPORT=stdio MCP_DATA_DIR="/custom/path" LOG_LEVEL=DEBUG uvx mcp-a2a-gateway
```
```shell
# Clone and run locally
git clone https://github.com/yw0nam/MCP-A2A-Gateway.git
cd MCP-A2A-Gateway

# Run with uv
uv run mcp-a2a-gateway

# Or run with uvx from the local directory
uvx --from . mcp-a2a-gateway

# Run with a custom environment for development
MCP_TRANSPORT=streamable-http MCP_PORT=8080 uvx --from . mcp-a2a-gateway
```
Cloud-deployed agents are also supported.
- Agent Management
- Communication
- Task Management
- Transport Support
Before you begin, ensure you have the following installed:
Run directly without installation using `uvx`:

```shell
uvx mcp-a2a-gateway
```
```shell
git clone https://github.com/yw0nam/MCP-A2A-Gateway.git
cd MCP-A2A-Gateway

# Run with uv
uv run mcp-a2a-gateway

# Or with uvx from the local directory
uvx --from . mcp-a2a-gateway
```
Start the server with HTTP transport:
```shell
# Using uvx
MCP_TRANSPORT=streamable-http MCP_HOST=0.0.0.0 MCP_PORT=10000 uvx mcp-a2a-gateway
```
Start the server with SSE transport:
```shell
# Using uvx
MCP_TRANSPORT=sse MCP_HOST=0.0.0.0 MCP_PORT=10000 uvx mcp-a2a-gateway
```
The server can be configured using the following environment variables:
| Variable | Default | Description |
|---|---|---|
| `MCP_TRANSPORT` | `stdio` | Transport type: `stdio`, `streamable-http`, or `sse` |
| `MCP_HOST` | `0.0.0.0` | Host for HTTP/SSE transports |
| `MCP_PORT` | `8000` | Port for HTTP/SSE transports |
| `MCP_PATH` | `/mcp` | HTTP endpoint path |
| `MCP_DATA_DIR` | `data` | Directory for persistent data storage |
| `MCP_REQUEST_TIMEOUT` | `30` | Request timeout in seconds |
| `MCP_REQUEST_IMMEDIATE_TIMEOUT` | `2` | Immediate response timeout in seconds |
| `LOG_LEVEL` | `INFO` | Logging level: `DEBUG`, `INFO`, `WARNING`, `ERROR` |
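The two timeouts interact: `MCP_REQUEST_TIMEOUT` bounds the whole request, while `MCP_REQUEST_IMMEDIATE_TIMEOUT` bounds how long a call waits before handing back just a task id for later polling. A rough sketch of that pattern, with illustrative function names rather than the gateway's actual internals:

```python
import asyncio

IMMEDIATE_TIMEOUT = 0.1  # seconds; stands in for MCP_REQUEST_IMMEDIATE_TIMEOUT

async def send_with_immediate_timeout(coro, task_id: str) -> dict:
    """Return the agent's answer if it arrives quickly, else just a task id."""
    task = asyncio.ensure_future(coro)
    try:
        # shield() keeps the underlying task alive if the wait times out.
        result = await asyncio.wait_for(asyncio.shield(task), IMMEDIATE_TIMEOUT)
        return {"status": "completed", "result": result}
    except asyncio.TimeoutError:
        # The background work keeps running; the client polls for the result.
        return {"status": "pending", "task_id": task_id}

async def main():
    async def fast_agent():
        return "1 USD = 0.92 EUR"

    async def slow_agent():
        await asyncio.sleep(1)
        return "late answer"

    quick = await send_with_immediate_timeout(fast_agent(), "t1")
    deferred = await send_with_immediate_timeout(slow_agent(), "t2")
    return quick, deferred

quick, deferred = asyncio.run(main())
```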
Example .env file:
```shell
# Transport configuration
MCP_TRANSPORT=stdio
MCP_HOST=0.0.0.0
MCP_PORT=10000
MCP_PATH=/mcp

# Data storage
MCP_DATA_DIR=/Users/your-username/Desktop/data/a2a_gateway

# Timeouts
MCP_REQUEST_TIMEOUT=30
MCP_REQUEST_IMMEDIATE_TIMEOUT=2

# Logging
LOG_LEVEL=INFO
```
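Settings like these are conventionally read from the environment with defaults. A sketch of how the documented defaults might be loaded (the variable names match the table above; the loader itself is illustrative, not the gateway's code):

```python
import os

def load_gateway_config(env=None) -> dict:
    """Read gateway settings, falling back to the documented defaults."""
    env = os.environ if env is None else env
    return {
        "transport": env.get("MCP_TRANSPORT", "stdio"),
        "host": env.get("MCP_HOST", "0.0.0.0"),
        "port": int(env.get("MCP_PORT", "8000")),
        "path": env.get("MCP_PATH", "/mcp"),
        "data_dir": env.get("MCP_DATA_DIR", "data"),
        "request_timeout": int(env.get("MCP_REQUEST_TIMEOUT", "30")),
        "immediate_timeout": int(env.get("MCP_REQUEST_IMMEDIATE_TIMEOUT", "2")),
        "log_level": env.get("LOG_LEVEL", "INFO"),
    }

# Only the overridden values differ from the defaults in the table.
config = load_gateway_config({"MCP_TRANSPORT": "streamable-http", "MCP_PORT": "10000"})
```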
The A2A MCP Server supports multiple transport types:
- `stdio` (default): uses standard input/output for communication
- `streamable-http` (recommended for web clients): HTTP transport with streaming support
- `sse`: Server-Sent Events transport
For `sse` or `streamable-http` transports, add the following to your VS Code `settings.json`:
```json
{
  "mcpServers": {
    "mcp_a2a_gateway": {
      "url": "http://0.0.0.0:10000/mcp"
    }
  }
}
```
For `stdio` transport using the published package:

```json
{
  "mcpServers": {
    "mcp_a2a_gateway": {
      "type": "stdio",
      "command": "uvx",
      "args": ["mcp-a2a-gateway"],
      "env": {
        "MCP_TRANSPORT": "stdio",
        "MCP_DATA_DIR": "/Users/your-username/Desktop/data/Copilot/a2a_gateway/"
      }
    }
  }
}
```
For `stdio` transport running from a local checkout with `uvx`:

```json
{
  "mcpServers": {
    "mcp_a2a_gateway": {
      "type": "stdio",
      "command": "uvx",
      "args": ["--from", "/path/to/MCP-A2A-Gateway", "mcp-a2a-gateway"],
      "env": {
        "MCP_TRANSPORT": "stdio",
        "MCP_DATA_DIR": "/Users/your-username/Desktop/data/Copilot/a2a_gateway/"
      }
    }
  }
}
```
Or using `uv` with a local checkout:

```json
{
  "mcpServers": {
    "mcp_a2a_gateway": {
      "type": "stdio",
      "command": "uv",
      "args": [
        "--directory",
        "/path/to/MCP-A2A-Gateway",
        "run",
        "mcp-a2a-gateway"
      ],
      "env": {
        "MCP_TRANSPORT": "stdio",
        "MCP_DATA_DIR": "/Users/your-username/Desktop/data/Copilot/a2a_gateway/"
      }
    }
  }
}
```
To run the published package with `uvx`, add this to `claude_config.json`:

```json
{
  "mcpServers": {
    "mcp_a2a_gateway": {
      "command": "uvx",
      "args": ["mcp-a2a-gateway"],
      "env": {
        "MCP_TRANSPORT": "stdio",
        "MCP_DATA_DIR": "/Users/your-username/Desktop/data/Claude/a2a_gateway/"
      }
    }
  }
}
```
To run from a local checkout with `uvx`, add this to `claude_config.json`:

```json
{
  "mcpServers": {
    "mcp_a2a_gateway": {
      "command": "uvx",
      "args": ["--from", "/path/to/MCP-A2A-Gateway", "mcp-a2a-gateway"],
      "env": {
        "MCP_TRANSPORT": "stdio",
        "MCP_DATA_DIR": "/Users/your-username/Desktop/data/Claude/a2a_gateway/"
      }
    }
  }
}
```
To run with `uv` from a local checkout, add this to `claude_config.json`:

```json
{
  "mcpServers": {
    "mcp_a2a_gateway": {
      "command": "uv",
      "args": ["--directory", "/path/to/MCP-A2A-Gateway", "run", "mcp-a2a-gateway"],
      "env": {
        "MCP_TRANSPORT": "stdio",
        "MCP_DATA_DIR": "/Users/your-username/Desktop/data/Claude/a2a_gateway/"
      }
    }
  }
}
```
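A common failure mode with these configs is malformed JSON or a missing field. A small hedged sketch that sanity-checks an `mcpServers` entry before use (the specific checks are illustrative, not an official validator):

```python
import json

def check_mcp_server_entry(config_text: str, name: str) -> list:
    """Return a list of problems found in one mcpServers entry (empty = OK)."""
    problems = []
    entry = json.loads(config_text)["mcpServers"].get(name)
    if entry is None:
        return [f"no entry named {name!r}"]
    if "command" not in entry:
        problems.append("missing 'command'")
    # MCP_TRANSPORT may be omitted (defaults to stdio) or one of the known values.
    transport = entry.get("env", {}).get("MCP_TRANSPORT")
    if transport not in (None, "stdio", "streamable-http", "sse"):
        problems.append("unknown MCP_TRANSPORT value")
    return problems

sample = json.dumps({
    "mcpServers": {
        "mcp_a2a_gateway": {
            "command": "uvx",
            "args": ["mcp-a2a-gateway"],
            "env": {"MCP_TRANSPORT": "stdio"},
        }
    }
})
issues = check_mcp_server_entry(sample, "mcp_a2a_gateway")
```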
The server exposes the following MCP tools for integration with LLMs like Claude:
`register_agent`: Register an A2A agent with the bridge server.

```json
{
  "name": "register_agent",
  "arguments": {
    "url": "http://localhost:41242"
  }
}
```
`list_agents`: Get a list of all registered agents.

```json
{
  "name": "list_agents",
  "arguments": {"dummy": ""}
}
```
`unregister_agent`: Remove an A2A agent from the bridge server.

```json
{
  "name": "unregister_agent",
  "arguments": {
    "url": "http://localhost:41242"
  }
}
```
`send_message`: Send a message to an agent and get a `task_id` for the response.

```json
{
  "name": "send_message",
  "arguments": {
    "agent_url": "http://localhost:41242",
    "message": "What's the exchange rate from USD to EUR?",
    "session_id": "optional-session-id"
  }
}
```
`get_task_result`: Retrieve a task's result using its ID.

```json
{
  "name": "get_task_result",
  "arguments": {
    "task_id": "b30f3297-e7ab-4dd9-8ff1-877bd7cfb6b1"
  }
}
```
`get_task_list`: Get a list of all tasks and their statuses.

```json
{
  "name": "get_task_list",
  "arguments": {}
}
```
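A typical workflow chains these tools: register the agent, send a message, then poll for the result with the returned `task_id`. A sketch of the request payloads an MCP client would construct, following the shapes shown above:

```python
def tool_call(name: str, **arguments) -> dict:
    """Build an MCP tool-call payload shaped like the examples above."""
    return {"name": name, "arguments": arguments}

# 1. Register the agent with the gateway.
register = tool_call("register_agent", url="http://localhost:41242")

# 2. Send a message; the gateway replies with a task_id.
send = tool_call(
    "send_message",
    agent_url="http://localhost:41242",
    message="What's the exchange rate from USD to EUR?",
)

# 3. Poll for the result using that task_id (the value here is illustrative).
fetch = tool_call("get_task_result", task_id="b30f3297-e7ab-4dd9-8ff1-877bd7cfb6b1")
```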
We are actively developing and improving the gateway! We welcome contributions of all kinds. Here is our current development roadmap, focusing on creating a rock-solid foundation first.
This is our current focus. Our goal is to make the gateway as stable and easy to use as possible.
- Add a `/health` endpoint to monitor the server's status.
Want to contribute? Check out the issues tab or feel free to open a new one to discuss your ideas!
This project is licensed under the Apache License, Version 2.0 - see the LICENSE file for details.
This project uses automated publishing through GitHub Actions for seamless releases.
```shell
# Patch release (0.1.6 → 0.1.7)
./release.sh patch

# Minor release (0.1.6 → 0.2.0)
./release.sh minor

# Major release (0.1.6 → 1.0.0)
./release.sh major
```
The script bumps the version in `pyproject.toml` and creates the corresponding release tag.
```shell
# Update the version in pyproject.toml manually
# Then create and push a tag
git add pyproject.toml
git commit -m "chore: bump version to 0.1.7"
git tag v0.1.7
git push origin main
git push origin v0.1.7
```
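The patch/minor/major bumps shown above follow plain semantic versioning. A sketch of that arithmetic, independent of what `release.sh` actually does internally:

```python
def bump(version: str, part: str) -> str:
    """Bump a MAJOR.MINOR.PATCH version string per semantic versioning."""
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "patch":
        patch += 1
    elif part == "minor":
        minor, patch = minor + 1, 0
    elif part == "major":
        major, minor, patch = major + 1, 0, 0
    else:
        raise ValueError(f"unknown part: {part}")
    return f"{major}.{minor}.{patch}"
```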
Pushing the tag (e.g. `v0.1.7`) triggers the release workflow.

To enable automated publishing, add your PyPI API token to GitHub Secrets:

1. Get a PyPI API token (it starts with `pypi-`).
2. Add it to GitHub Secrets as `PYPI_API_TOKEN`.
3. Test the workflow by pushing a tag.
For emergency releases or local testing:
```shell
# Build and get manual publish instructions
./publish.sh

# Or publish directly (with credentials configured)
uv build
uv publish
```