Access Prometheus metrics and queries through standardized MCP interfaces.
A comprehensive Model Context Protocol (MCP) server for Prometheus, written in Go.
It provides complete access to your Prometheus metrics, queries, and system information through standardized MCP interfaces, allowing AI assistants to execute PromQL queries, discover metrics, explore labels, and analyze your monitoring infrastructure.
Download the latest binary for your platform from the releases page, or build from source:
git clone https://github.com/giantswarm/mcp-prometheus.git
cd mcp-prometheus
go build -o mcp-prometheus
Configure the MCP server through environment variables (all optional):
# Optional: Default Prometheus server configuration
export PROMETHEUS_URL=http://your-prometheus-server:9090
# Optional: Authentication credentials (choose one)
# For basic auth
export PROMETHEUS_USERNAME=your_username
export PROMETHEUS_PASSWORD=your_password
# For bearer token auth
export PROMETHEUS_TOKEN=your_token
# Optional: Default organization ID for multi-tenant setups
export PROMETHEUS_ORGID=your_organization_id
Dynamic Configuration: All MCP tools support prometheus_url and org_id parameters for per-query configuration, allowing you to query multiple Prometheus instances and organizations dynamically.
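For example, the same tool can be pointed at a different Prometheus instance and tenant on each call; the URL and organization below are purely illustrative:

{
  "query": "up",
  "prometheus_url": "http://prometheus-staging:9090",
  "org_id": "team-a"
}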
Start the server with stdio transport (default):
./mcp-prometheus
Start with HTTP transport for web-based clients:
./mcp-prometheus serve --transport sse --http-addr :8080
Add the server configuration to your MCP client. For example, with Claude Desktop:
{
  "mcpServers": {
    "prometheus": {
      "command": "/path/to/mcp-prometheus",
      "args": ["serve"],
      "env": {
        "PROMETHEUS_URL": "http://your-prometheus-server:9090",
        "PROMETHEUS_ORGID": "your-default-org"
      }
    }
  }
}
Query execution tools:

Tool | Description | Required Parameters | Optional Parameters |
---|---|---|---|
mcp_prometheus_execute_query | Execute PromQL instant query | query | prometheus_url, org_id, time, timeout, limit, stats, lookback_delta, unlimited |
mcp_prometheus_execute_range_query | Execute PromQL range query | query, start, end, step | prometheus_url, org_id, timeout, limit, stats, lookback_delta, unlimited |
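As a sketch of the instant-query tool, the optional time parameter evaluates the expression at an explicit point in time; the timestamp and URL below are illustrative:

{
  "query": "up",
  "time": "2025-01-27T12:00:00Z",
  "prometheus_url": "http://prometheus:9090"
}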
Metric discovery tools:

Tool | Description | Required Parameters | Optional Parameters |
---|---|---|---|
mcp_prometheus_list_metrics | List all available metrics | - | prometheus_url, org_id, start_time, end_time, matches |
mcp_prometheus_get_metric_metadata | Get metadata for specific metric | metric | prometheus_url, org_id, limit |
Label exploration tools:

Tool | Description | Required Parameters | Optional Parameters |
---|---|---|---|
mcp_prometheus_list_label_names | Get all available label names | - | prometheus_url, org_id, start_time, end_time, matches, limit |
mcp_prometheus_list_label_values | Get values for specific label | label | prometheus_url, org_id, start_time, end_time, matches, limit |
mcp_prometheus_find_series | Find series by label matchers | matches | prometheus_url, org_id, start_time, end_time, limit |
System information tools:

Tool | Description | Required Parameters | Optional Parameters |
---|---|---|---|
mcp_prometheus_get_targets | Get scrape target information | - | prometheus_url, org_id |
mcp_prometheus_get_build_info | Get Prometheus build information | - | prometheus_url, org_id |
mcp_prometheus_get_runtime_info | Get Prometheus runtime information | - | prometheus_url, org_id |
mcp_prometheus_get_flags | Get Prometheus runtime flags | - | prometheus_url, org_id |
mcp_prometheus_get_config | Get Prometheus configuration | - | prometheus_url, org_id |
mcp_prometheus_get_tsdb_stats | Get TSDB cardinality statistics | - | prometheus_url, org_id, limit |
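For instance, a mcp_prometheus_get_tsdb_stats call can use the optional limit parameter to cap how many cardinality entries are returned; the values below are illustrative:

{
  "prometheus_url": "http://prometheus:9090",
  "limit": "10"
}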
Alerting tools:

Tool | Description | Required Parameters | Optional Parameters |
---|---|---|---|
mcp_prometheus_get_alerts | Get active alerts | - | prometheus_url, org_id |
mcp_prometheus_get_alertmanagers | Get AlertManager discovery info | - | prometheus_url, org_id |
mcp_prometheus_get_rules | Get recording and alerting rules | - | prometheus_url, org_id |
Exemplar and target metadata tools:

Tool | Description | Required Parameters | Optional Parameters |
---|---|---|---|
mcp_prometheus_query_exemplars | Query exemplars for traces | query, start, end | prometheus_url, org_id |
mcp_prometheus_get_targets_metadata | Get metadata from specific targets | - | prometheus_url, org_id, match_target, metric, limit |
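A minimal sketch of an exemplars query, which takes a PromQL expression plus a start/end window; the metric name and timestamps below are assumptions, not part of the server's documentation:

{
  "query": "http_request_duration_seconds_bucket{job=\"api-server\"}",
  "start": "2025-01-27T00:00:00Z",
  "end": "2025-01-27T01:00:00Z",
  "prometheus_url": "http://prometheus:9090"
}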
Connection Parameters (available on all tools):
- prometheus_url: Prometheus server URL (e.g., 'http://localhost:9090')
- org_id: Organization ID for multi-tenant setups (e.g., 'tenant-123')

Query Enhancement Parameters:
- timeout: Query timeout (e.g., '30s', '1m', '5m')
- limit: Maximum number of returned entries
- stats: Include query statistics ('all')
- lookback_delta: Query lookback delta (e.g., '5m')
- unlimited: Set to 'true' for unlimited output (WARNING: may impact performance)

Time Filtering Parameters:
- start_time, end_time: RFC3339 timestamps for filtering
- matches: Array of label matchers (e.g., ['{job="prometheus"}', '{__name__=~"http_.*"}'])

Execute an instant query against a specific Prometheus instance and organization:

{
  "query": "up",
  "prometheus_url": "http://prometheus:9090",
  "org_id": "production"
}
Instant query with enhancement parameters (timeout, result limit, and statistics):

{
  "query": "rate(http_requests_total[5m])",
  "prometheus_url": "http://prometheus:9090",
  "timeout": "30s",
  "limit": "100",
  "stats": "all"
}
Range query over a one-hour window with a one-minute step:

{
  "query": "cpu_usage_percent",
  "start": "2025-01-27T00:00:00Z",
  "end": "2025-01-27T01:00:00Z",
  "step": "1m",
  "prometheus_url": "http://prometheus:9090"
}
Filtering with label matchers and a result limit:

{
  "prometheus_url": "http://prometheus:9090",
  "matches": ["up{job=\"kubernetes-nodes\"}"],
  "limit": "20"
}
Finding series that match a metric-name regex for a specific job:

{
  "matches": ["{__name__=~\"http_.*\", job=\"api-server\"}"],
  "prometheus_url": "http://prometheus:9090",
  "start_time": "2025-01-27T00:00:00Z",
  "limit": "50"
}
Querying a multi-tenant backend (e.g., Cortex) with a per-request organization ID:

{
  "query": "container_memory_usage_bytes",
  "prometheus_url": "http://cortex-gateway:8080/prometheus",
  "org_id": "team-platform"
}
The MCP server includes intelligent query result handling: very large results (over roughly 50k characters) are truncated by default, with guidance on how to narrow the query or retrieve the full output.
Example truncation message:
⚠️ RESULT TRUNCATED: The query returned a very large result (>50k characters).
💡 To optimize your query and get less output, consider:
• Adding more specific label filters: {app="specific-app", namespace="specific-ns"}
• Using aggregation functions: sum(), avg(), count(), etc.
• Using topk() or bottomk() to get only top/bottom N results
🔧 To get the full untruncated result, add "unlimited": "true" to your query parameters.
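To bypass truncation for a specific call, set the unlimited flag in the tool parameters; the query and URL below are illustrative:

{
  "query": "container_memory_usage_bytes",
  "prometheus_url": "http://prometheus:9090",
  "unlimited": "true"
}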
The server supports multiple transport protocols:
./mcp-prometheus serve --transport stdio                              # stdio (default)
./mcp-prometheus serve --transport sse --http-addr :8080              # SSE, for web-based clients
./mcp-prometheus serve --transport streamable-http --http-addr :8080  # streamable HTTP
The server follows a modern, DRY architecture:
mcp-prometheus/
├── cmd/                    # CLI commands and version info
├── internal/
│   ├── server/             # Server context and configuration
│   └── tools/prometheus/   # 18 comprehensive MCP tools
├── main.go                 # Application entry point
├── go.mod                  # Go dependencies
└── README.md               # This documentation
# Build the binary
go build -o mcp-prometheus
# Run tests
go test ./...
# Basic authentication
export PROMETHEUS_USERNAME=myuser
export PROMETHEUS_PASSWORD=mypassword

# Bearer token authentication
export PROMETHEUS_TOKEN=my-bearer-token
Perfect for Cortex, Mimir, Thanos, and other multi-tenant setups:
export PROMETHEUS_ORGID=tenant-123
The server provides comprehensive error handling with detailed messages.

Contributions are welcome:
- Create your feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add some amazing feature')
- Push to the branch (git push origin feature/amazing-feature)

This project is licensed under the same terms as the original Python implementation.