MCP Hangar
Parallel MCP tool execution. One interface. 50x faster.
The Problem
Your AI agent calls 5 tools sequentially. Each takes 200ms. That's 1 second of waiting.
Hangar runs them in parallel. 200ms total. Same results, 5x faster here, and up to 50x on larger batches (see Benchmarks).
Quick Start
30 seconds to working MCP providers:
# Install, configure, and start - zero interaction
curl -sSL https://get.mcp-hangar.io | bash && mcp-hangar init -y && mcp-hangar serve
That's it. Filesystem, fetch, and memory providers are now available to Claude.
- Install - Downloaded and installed mcp-hangar via pip/uv
- Init - Created ~/.config/mcp-hangar/config.yaml with starter providers
- Serve - Started the MCP server (stdio mode for Claude Desktop)
The init -y flag uses sensible defaults:
- Detects available runtimes (uvx preferred, npx fallback)
- Configures starter bundle: filesystem, fetch, memory
- Updates Claude Desktop config automatically
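For reference, the starter file written by init -y looks roughly like this (an illustrative sketch based on the provider schema shown under Custom Configuration; exact package names and defaults may vary by version):

providers:
  filesystem:
    mode: subprocess
    command: [uvx, mcp-server-filesystem]   # package names illustrative; runtime detected at init
  fetch:
    mode: subprocess
    command: [uvx, mcp-server-fetch]
  memory:
    mode: subprocess
    command: [uvx, mcp-server-memory]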
Manual Setup
If you prefer step-by-step:
# 1. Install
pip install mcp-hangar
# or: uv pip install mcp-hangar
# 2. Initialize with wizard
mcp-hangar init
# 3. Start server
mcp-hangar serve
Custom Configuration
Create ~/.config/mcp-hangar/config.yaml:
providers:
  github:
    mode: subprocess
    command: [uvx, mcp-server-github]
    env:
      GITHUB_TOKEN: ${GITHUB_TOKEN}
  slack:
    mode: subprocess
    command: [uvx, mcp-server-slack]
  internal-api:
    mode: remote
    endpoint: "http://localhost:8080"
Claude Desktop is auto-configured by mcp-hangar init. For manual setup, add this to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS path):
{
  "mcpServers": {
    "hangar": {
      "command": "mcp-hangar",
      "args": ["serve", "--config", "~/.config/mcp-hangar/config.yaml"]
    }
  }
}
Restart Claude Desktop. Done.
One Interface
hangar_call([
    {"provider": "github", "tool": "search_repos", "arguments": {"query": "mcp"}},
    {"provider": "slack", "tool": "post_message", "arguments": {"channel": "#dev"}},
    {"provider": "internal-api", "tool": "get_status", "arguments": {}}
])
Single call. Parallel execution. All results returned together.
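Conceptually, the batch behaves like asyncio.gather over the individual tool calls: total latency is the slowest call, not the sum. A self-contained sketch of that model (call_tool is a hypothetical stand-in, not Hangar's API):

import asyncio
import time

async def call_tool(provider: str, tool: str, arguments: dict) -> dict:
    # Hypothetical stand-in for one MCP tool round-trip.
    await asyncio.sleep(0.2)  # simulate a 200ms tool call
    return {"provider": provider, "tool": tool, "ok": True}

async def main() -> None:
    batch = [
        {"provider": "github", "tool": "search_repos", "arguments": {"query": "mcp"}},
        {"provider": "slack", "tool": "post_message", "arguments": {"channel": "#dev"}},
        {"provider": "internal-api", "tool": "get_status", "arguments": {}},
    ]
    start = time.perf_counter()
    results = await asyncio.gather(*(call_tool(**req) for req in batch))
    # ~0.20s wall time instead of ~0.60s sequentially
    print(len(results), f"{time.perf_counter() - start:.2f}s")

asyncio.run(main())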
Benchmarks
| Scenario | Sequential | Hangar | Speedup |
|---|---|---|---|
| 15 tools, 2 providers | ~20s | 380ms | 50x |
| 50 concurrent requests | ~15s | 1.3s | 10x |
| Cold start + batch | ~5s | <500ms | 10x |
100% success rate across these runs, with <10ms framework overhead.
Why It's Fast
Single-flight cold starts. When 10 parallel calls hit a cold provider, it initializes once — not 10 times.
Automatic concurrency. Configurable parallelism with backpressure. No thundering herd.
Provider pooling. Hot providers stay warm. Cold providers spin up on demand, shut down after idle TTL.
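The single-flight idea is easy to picture: the first caller to reach a cold provider starts the initialization and stashes the in-flight task; every concurrent caller awaits that same task. A minimal sketch of the pattern (illustrative, not Hangar's internals):

import asyncio

class SingleFlight:
    """First caller runs the init; concurrent callers await the same task."""

    def __init__(self) -> None:
        self._inflight: dict[str, asyncio.Task] = {}
        self._lock = asyncio.Lock()

    async def get(self, key: str, init):
        async with self._lock:
            task = self._inflight.get(key)
            if task is None:
                task = asyncio.create_task(init())
                self._inflight[key] = task
        return await task

async def boot_github():
    await asyncio.sleep(1.0)  # expensive cold start
    return "ready"

async def main() -> None:
    flight = SingleFlight()
    # 10 parallel calls hit the cold provider; boot_github runs exactly once.
    results = await asyncio.gather(*(flight.get("github", boot_github) for _ in range(10)))
    print(results)  # ['ready', ...] after ~1s, with one boot instead of ten

asyncio.run(main())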
Production Ready
Lifecycle management. Lazy loading, health checks, automatic restart, graceful shutdown.
Circuit breaker. One failing provider doesn't kill your batch. Automatic isolation and recovery.
Observability. Correlation IDs across parallel calls. OpenTelemetry traces, Prometheus metrics.
Multi-provider. Subprocess, Docker, remote HTTP — mix them in a single batch call.
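The circuit breaker amounts to counting consecutive failures per provider and short-circuiting calls once a threshold trips, then probing again after a cooldown. A simplified sketch (illustrative, not Hangar's implementation; the threshold corresponds to max_consecutive_failures in the config below):

import time

class CircuitBreaker:
    """Open after N consecutive failures; allow a probe after a cooldown."""

    def __init__(self, max_failures: int = 3, cooldown_s: float = 30.0) -> None:
        self.max_failures = max_failures
        self.cooldown_s = cooldown_s
        self.failures = 0
        self.opened_at: float | None = None

    def allow(self) -> bool:
        if self.opened_at is None:
            return True  # closed: traffic flows normally
        # Half-open: permit one probe once the cooldown has elapsed.
        return time.monotonic() - self.opened_at >= self.cooldown_s

    def record(self, ok: bool) -> None:
        if ok:
            self.failures, self.opened_at = 0, None  # recover: close the breaker
            return
        self.failures += 1
        if self.failures >= self.max_failures:
            self.opened_at = time.monotonic()  # trip: isolate the provider

With one breaker per provider, a flaky provider fails fast while the rest of the batch proceeds.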
Configuration
providers:
  - id: fast-provider
    command: ["python", "fast.py"]
    idle_ttl_s: 300               # Shutdown after 5min idle
    health_check_interval_s: 60   # Check health every minute
    max_consecutive_failures: 3   # Circuit breaker threshold
  - id: docker-provider
    image: my-registry/mcp-server:latest
    network: bridge
  - id: remote-provider
    url: "https://api.example.com/mcp"
Works Everywhere
- Home lab: 2 providers, zero config complexity
- Team setup: Shared providers, Docker containers
- Enterprise: 50+ providers, observability stack, Kubernetes
Same API. Same reliability. Different scale.
Documentation
License
MIT — use it, fork it, ship it.