mcp-bridge
HTTP stream to stdin/stdout and back
Go MCP HTTP Bridge - Master Documentation
Overview
A Go application that wraps MCP (Model Context Protocol) servers with HTTP streaming (SSE), fully compatible with llama.cpp's StreamableHTTP transport. This bridge allows web-based MCP clients to communicate with subprocess-based MCP servers.
Architecture
┌─────────────────┐
│ Client │
│ (LLM/App) │
└────────┬────────┘
│
│ HTTP POST + SSE Stream
▼
┌──────────────────────────────┐
│ Go MCP HTTP Bridge │
│ ┌────────────────────────┐ │
│ │ HTTP Server │ │
│ │ - POST /mcp/{ns}/msg │ │
│ │ - GET /mcp/{ns} (SSE) │ │
│ └────────────────────────┘ │
│ ┌────────────────────────┐ │
│ │ Protocol Handler │ │
│ │ - initialize │ │
│ │ - tools/list │ │
│ │ - tools/call │ │
│ └────────────────────────┘ │
│ ┌────────────────────────┐ │
│ │ Subprocess Manager │ │
│ │ - test-server │ │
│ └────────────────────────┘ │
└──────────────────────────────┘
Features
✅ Implemented (All Phases 1-6 Complete)
- HTTP Streaming Server
  - SSE streaming (GET /mcp/{namespace})
  - HTTP POST endpoint (POST /mcp/{namespace}/message)
  - Proper CORS support
  - Connection lifecycle management
- Subprocess Management
  - On-demand spawning
  - Connection reuse
  - Exponential backoff restarts (1s, 2s, 4s... up to 60s)
  - Graceful shutdown (SIGTERM/SIGINT handling)
  - Process state tracking
- JSON-RPC 2.0
  - Request/response parsing
  - Message validation
  - Error handling
  - Notification support
- MCP Protocol Support
  - initialize - handshake with server info
  - tools/list - list available tools
  - tools/call - execute tools
  - ping - health check
- Security
  - Input validation (JSON-RPC, arguments)
  - Command injection prevention
  - CORS and origin validation
  - Message size limits (1MB request, 10MB response)
  - Connection limits (configurable, default: 5)
- Logging
  - Structured JSON logging with slog
  - Context-aware logs (namespace, session, request ID)
  - Connection event tracking
  - Subprocess event logging
- Debug Tools
  - /debug - dashboard
  - /debug/stream - real-time message log
- Health Checks
  - /health - bridge status
  - /health/{namespace} - per-server status
- Prometheus Metrics
  - /metrics endpoint
  - HTTP request metrics (mcp_http_requests_total, mcp_http_request_duration_seconds)
  - Tool call metrics (mcp_tool_calls_total, mcp_tool_calls_duration_seconds, mcp_tool_errors_total)
  - Active sessions gauge (mcp_active_sessions)
  - Subprocess state metrics (mcp_subprocess_state)
- Enhanced Debug Streaming
  - /debug/stream - real-time DEBUG and INFO logs
  - JSON-RPC 2.0 compliant SSE format
  - No verbose flag needed
🔧 Configuration
Full configuration example:
bridge:
  port: 8080
  allowed_origins:
    - http://localhost:3000
    - http://127.0.0.1:3000
  request_timeout: 30s
  idle_timeout: 5m
  connection_limit: 5
  request_size_limit: 1048576
  response_size_limit: 10485760

servers:
  filesystem:
    name: "filesystem"
    binary: "/usr/bin/node"
    args:
      - "/path/to/mcp-filesystem-server/index.js"
    env:
      HOME: "/home/user"
    timeout: 30s
    max_restarts: 3
    auto_start: true
  git:
    name: "git"
    binary: "npx"
    args:
      - "-y"
      - "@modelcontextprotocol/server-git"
    timeout: 60s
    max_restarts: 5
    auto_start: false
For complete configuration options, see docs/CONFIG.md.
🚀 Quick Start
1. Build

   cd go-mcp-bridge
   make build

2. Configure. Create config.yaml with your MCP server settings:

   bridge:
     port: 8080
     allowed_origins:
       - http://localhost:3000
   servers:
     git:
       name: "git"
       binary: "npx"
       args:
         - "-y"
         - "@modelcontextprotocol/server-git"

3. Run

   ./bin/bridge --config config.yaml

4. Test

   # Health check
   curl http://localhost:8080/health

   # Send an initialize request
   curl -X POST http://localhost:8080/mcp/git/message \
     -H "Content-Type: application/json" \
     -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{...}}'

   # Check metrics
   curl http://localhost:8080/metrics

   # View the debug stream
   curl http://localhost:8080/debug/stream
📡 API Endpoints
Primary MCP Endpoint
GET /mcp/{namespace} → Start SSE stream
POST /mcp/{namespace}/message → Send JSON-RPC request
Example Request:
curl -X POST http://localhost:8080/mcp/test/message \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{...}}'
Health Check
GET /health → Bridge health
GET /health/{namespace} → Namespace health
Debug Endpoint
GET /debug → Debug dashboard (HTML)
GET /debug/stream → Debug message stream (SSE)
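A minimal Go client for the message endpoint above. The base URL and the git namespace are assumptions taken from the examples; the request is only constructed and printed here, since actually sending it requires a running bridge.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// rpcRequest is a minimal JSON-RPC 2.0 request envelope.
type rpcRequest struct {
	JSONRPC string `json:"jsonrpc"`
	ID      int    `json:"id"`
	Method  string `json:"method"`
	Params  any    `json:"params,omitempty"`
}

// newMessageRequest builds a POST to /mcp/{namespace}/message carrying
// the JSON-RPC body, matching the endpoint documented above.
func newMessageRequest(base, namespace string, rpc rpcRequest) (*http.Request, error) {
	body, err := json.Marshal(rpc)
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest("POST", base+"/mcp/"+namespace+"/message", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, err := newMessageRequest("http://localhost:8080", "git",
		rpcRequest{JSONRPC: "2.0", ID: 1, Method: "tools/list"})
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL)
	// To send it against a running bridge: http.DefaultClient.Do(req)
}
```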
Project Structure
go-mcp-bridge/
├── cmd/
│ ├── bridge/main.go # Main entry point
│ ├── test-server/ # Go test MCP server
│ └── test-mcp-server/ # TypeScript test MCP server
├── internal/
│ ├── config/
│ │ └── loader.go # YAML config parser
│ ├── mcp/
│ │ ├── handler.go # Protocol handler
│ │ ├── jsonrpc.go # JSON-RPC 2.0
│ │ └── types.go # MCP types
│ ├── process/
│ │ └── manager.go # Subprocess manager
│ ├── router/
│ │ └── namespace.go # Namespace routing
│ └── server/
│ ├── http.go # HTTP server
│ ├── sse.go # SSE writer
│ └── metrics.go # Prometheus metrics
├── docs/ # Documentation
│ ├── CONFIG.md # Configuration reference
│ ├── API.md # API specification
│ ├── EXAMPLES.md # Usage examples
│ ├── DEBUG.md # Debug endpoint guide
│ └── TESTING.md # Testing guide
├── testdata/ # Test configurations
├── bin/ # Built binaries
├── config.yaml # Configuration
├── Makefile # Build automation
├── test-tool-metrics.sh # Tool metrics test script
└── README.md # This file
Testing
Unit Tests
# Run all unit tests
go test ./... -v
# Specific package
go test ./internal/mcp/... -v
End-to-End Test
# Start bridge in background
./bin/bridge --config config.yaml &
BRIDGE_PID=$!
# Run tests
./test.sh
# Stop bridge
kill $BRIDGE_PID
Integration Tests (Complete)
# Test with real subprocess
./test-tool-metrics.sh
# Or run all tests
go test ./... -v
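The JSON-RPC parsing tests follow the usual Go table-driven style; the helper and cases below are illustrative, not the real suite in internal/mcp.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// validMessage is an illustrative request validator: a message must be
// valid JSON with jsonrpc "2.0" and a non-empty method.
func validMessage(raw string) bool {
	var msg struct {
		JSONRPC string `json:"jsonrpc"`
		Method  string `json:"method"`
	}
	if err := json.Unmarshal([]byte(raw), &msg); err != nil {
		return false
	}
	return msg.JSONRPC == "2.0" && msg.Method != ""
}

func main() {
	cases := []struct {
		raw  string
		want bool
	}{
		{`{"jsonrpc":"2.0","id":1,"method":"ping"}`, true},
		{`{"jsonrpc":"1.0","id":1,"method":"ping"}`, false}, // wrong version
		{`{"jsonrpc":"2.0","id":1}`, false},                 // missing method
		{`not json`, false},
	}
	for _, c := range cases {
		fmt.Printf("%-45s valid=%v want=%v\n", c.raw, validMessage(c.raw), c.want)
	}
}
```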
Git Branches
- master - current stable (all phases 1-6 complete)
Roadmap
✅ Phase 1-2: Core Architecture (COMPLETE)
- ✅ HTTP streaming server
- ✅ Subprocess management
- ✅ Basic JSON-RPC handling
✅ Phase 3: MCP Protocol Support (COMPLETE)
- ✅ JSON-RPC 2.0 handling
- ✅ Initialize method
- ✅ Tools/list and tools/call
- ✅ Request-response correlation
✅ Phase 4: Security & Error Handling (COMPLETE)
- ✅ Input validation (JSON-RPC, arguments)
- ✅ Command injection prevention
- ✅ Resource limits (size, connections)
- ✅ Exponential backoff restarts
- ✅ Graceful shutdown
- ✅ Structured JSON logging
✅ Phase 5: Testing & Documentation (COMPLETE)
- ✅ Unit tests (45 tests)
- JSON-RPC parsing (20 tests)
- Config loading (14 tests)
- Namespace routing (11 tests)
- ✅ Integration tests (45+ tests)
- HTTP server tests (20+ tests)
- Process manager tests (15+ tests)
- Namespace isolation tests (10 tests)
- ✅ Complete documentation (docs/CONFIG.md, docs/API.md, docs/EXAMPLES.md, docs/DEBUG.md, docs/TESTING.md)
✅ Phase 6: Build & Monitoring (COMPLETE)
- ✅ 6.1 Prometheus metrics (/metrics endpoint)
- ✅ HTTP request metrics (mcp_http_requests_total, mcp_http_request_duration_seconds)
- ✅ Active sessions gauge (mcp_active_sessions)
- ✅ Subprocess state metrics (mcp_subprocess_state)
- ✅ Tool call metrics (mcp_tool_calls_total, mcp_tool_calls_duration_seconds, mcp_tool_errors_total)
- ✅ Metrics endpoint at /metrics
- ✅ 6.2 Structured logging with slog
- ✅ Replace fmt.Printf with slog
- ✅ JSON structured logging
- ✅ Context-aware logs (namespace, session, request_id)
- ✅ 6.3 Enhanced debug streaming
- ✅ Stream DEBUG and INFO logs to /debug/stream
- ✅ JSON-RPC 2.0 compliant SSE format
- ✅ No verbose flag needed
- ✅ 6.4 Configuration cleanup
- ✅ Remove unused debug_port configuration
- ✅ Keep debug endpoints on main port 8080
- ✅ 6.5 Test script for tool metrics (test-tool-metrics.sh)
- ✅ 6.6 End-to-end verification of tool metrics recording
All phases complete as of Phase 6 merge to master.
Compatibility
✅ llama.cpp web frontend
- Compatible with StreamableHTTP transport
- Works with latest llama.cpp build
✅ MCP SDK
- Uses standard JSON-RPC 2.0 format
- Supports MCP protocol versions:
- 2025-06-18 (latest)
- 2025-03-26 (default)
- 2024-11-05 (backward compat)
Known MCP Servers
These servers can be used with the bridge:
- Git MCP (NPM)

  servers:
    git:
      name: "git"
      binary: "npx"
      args: ["-y", "@modelcontextprotocol/server-git"]

- Filesystem MCP (uvx)

  servers:
    filesystem:
      name: "filesystem"
      binary: "uvx"
      args: ["mcp-server-filesystem", "--allowed-directory", "/data"]

- Memory MCP (Docker)

  servers:
    memory:
      name: "memory"
      binary: "docker"
      args: ["run", "-i", "mcp/memory-server"]

- Custom Binary

  servers:
    custom:
      name: "custom"
      binary: "./my-mcp-server"
      args: ["--port", "8080"]
See EXAMPLES.md for more configuration examples.
Troubleshooting
Subprocess won't start
- Check the binary path is correct: which npx
- Verify the binary is executable: chmod +x ./bin/test-server
- Check environment variables in config
- Look at structured logs for errors

Connection fails
- Verify the namespace exists in config
- Check allowed_origins includes the client origin
- Look at /debug/stream for real-time errors
- Check /health/{namespace} for subprocess status

Messages not streaming
- Ensure the subprocess outputs JSON-RPC to stdout
- Check for newline termination on messages (\n)
- Verify SSE headers are set correctly
- Use /debug/stream to see raw messages
High restart count
- Check subprocess logs for errors
- Increase timeout if subprocess is slow
- Verify args are correct
- Check resource limits
Security errors
- Verify argument sanitization
- Check allowed hosts/origins
- Ensure binary path is safe (no shell injection)
Documentation
| Document | Description |
|---|---|
| docs/CONFIG.md | Complete configuration reference |
| docs/API.md | HTTP API specification |
| docs/EXAMPLES.md | Usage examples and configurations |
| docs/DEBUG.md | Debug endpoint guide |
| docs/TESTING.md | Testing guide and test infrastructure |
Contributing
Contributions welcome! Please:
- Create a feature branch
- Add tests for new functionality
- Update documentation
- Submit a pull request
License
MIT