A Model Context Protocol server for interacting with Honeycomb observability data. This server enables LLMs like Claude to directly analyze and query your Honeycomb datasets across multiple environments.
Honeycomb MCP is effectively a complete alternative interface to Honeycomb, so it needs broad API permissions.
Currently, this is only available for Honeycomb Enterprise customers.
Today, this is a single server process that you must run on your own computer. It is not authenticated. All communication between your client and the server happens over stdio.
pnpm install
pnpm run build
The build artifact goes into the /build folder.
To use this MCP server, you need to provide Honeycomb API keys via environment variables in your MCP config.
{
  "mcpServers": {
    "honeycomb": {
      "command": "node",
      "args": [
        "/fully/qualified/path/to/honeycomb-mcp/build/index.mjs"
      ],
      "env": {
        "HONEYCOMB_API_KEY": "your_api_key"
      }
    }
  }
}
For multiple environments:
{
  "mcpServers": {
    "honeycomb": {
      "command": "node",
      "args": [
        "/fully/qualified/path/to/honeycomb-mcp/build/index.mjs"
      ],
      "env": {
        "HONEYCOMB_ENV_PROD_API_KEY": "your_prod_api_key",
        "HONEYCOMB_ENV_STAGING_API_KEY": "your_staging_api_key"
      }
    }
  }
}
Important: These environment variables must be set in the env block of your MCP config.
EU customers must also set the HONEYCOMB_API_ENDPOINT environment variable, since the MCP defaults to the non-EU instance.
# Optional custom API endpoint (defaults to https://api.honeycomb.io)
HONEYCOMB_API_ENDPOINT=https://api.eu1.honeycomb.io/
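For example, the endpoint can be set alongside the API key in the env block of your MCP config (the path and key value are placeholders):

{
  "mcpServers": {
    "honeycomb": {
      "command": "node",
      "args": [
        "/fully/qualified/path/to/honeycomb-mcp/build/index.mjs"
      ],
      "env": {
        "HONEYCOMB_API_KEY": "your_api_key",
        "HONEYCOMB_API_ENDPOINT": "https://api.eu1.honeycomb.io/"
      }
    }
  }
}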
The MCP server implements caching for all non-query Honeycomb API calls to improve performance and reduce API usage. Caching can be configured using these environment variables:
# Enable/disable caching (default: true)
HONEYCOMB_CACHE_ENABLED=true
# Default TTL in seconds (default: 300)
HONEYCOMB_CACHE_DEFAULT_TTL=300
# Resource-specific TTL values in seconds (defaults shown)
HONEYCOMB_CACHE_DATASET_TTL=900 # 15 minutes
HONEYCOMB_CACHE_COLUMN_TTL=900 # 15 minutes
HONEYCOMB_CACHE_BOARD_TTL=900 # 15 minutes
HONEYCOMB_CACHE_SLO_TTL=900 # 15 minutes
HONEYCOMB_CACHE_TRIGGER_TTL=900 # 15 minutes
HONEYCOMB_CACHE_MARKER_TTL=900 # 15 minutes
HONEYCOMB_CACHE_RECIPIENT_TTL=900 # 15 minutes
HONEYCOMB_CACHE_AUTH_TTL=3600 # 1 hour
# Maximum cache size (items per resource type)
HONEYCOMB_CACHE_MAX_SIZE=1000
Honeycomb MCP has been tested with several MCP clients, and it will likely work with others.
Access Honeycomb datasets using URIs in the format:
honeycomb://{environment}/{dataset}
For example:
honeycomb://production/api-requests
honeycomb://staging/backend-services
The resource response includes the dataset's name, description, and column information.
list_datasets: List all datasets in an environment

{
  "environment": "production"
}
get_columns: Get column information for a dataset

{
  "environment": "production",
  "dataset": "api-requests"
}
run_query: Run analytics queries with rich options

{
  "environment": "production",
  "dataset": "api-requests",
  "calculations": [
    { "op": "COUNT" },
    { "op": "P95", "column": "duration_ms" }
  ],
  "breakdowns": ["service.name"],
  "time_range": 3600
}
analyze_columns: Analyzes specific columns in a dataset by running statistical queries and returning computed metrics.
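A minimal invocation sketch; the columns parameter name is assumed here, following the pattern of the other tools:

{
  "environment": "production",
  "dataset": "api-requests",
  "columns": ["duration_ms"]
}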
list_slos: List all SLOs for a dataset

{
  "environment": "production",
  "dataset": "api-requests"
}
get_slo: Get detailed SLO information

{
  "environment": "production",
  "dataset": "api-requests",
  "sloId": "abc123"
}
list_triggers: List all triggers for a dataset

{
  "environment": "production",
  "dataset": "api-requests"
}
get_trigger: Get detailed trigger information

{
  "environment": "production",
  "dataset": "api-requests",
  "triggerId": "xyz789"
}
get_trace_link: Generate a deep link to a specific trace in the Honeycomb UI
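An invocation sketch; the traceId parameter name and its value are illustrative assumptions:

{
  "environment": "production",
  "dataset": "api-requests",
  "traceId": "abc123def456"
}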
get_instrumentation_help: Provides OpenTelemetry instrumentation guidance
{
  "language": "python",
  "filepath": "app/services/payment_processor.py"
}
Ask Claude things like "What datasets are available in my production environment?" or "Show me the P95 latency of api-requests broken down by service over the last hour."
All tool responses are optimized to reduce context window usage while maintaining essential information. This ensures that responses are concise but complete, allowing LLMs to process more data within context limitations.
The run_query tool supports a comprehensive query specification:
calculations: Array of operations to perform, e.g. {"op": "HEATMAP", "column": "duration_ms"}
filters: Array of filter conditions, e.g. {"column": "error", "op": "=", "value": true}
filter_combination: "AND" or "OR" (default: "AND")
breakdowns: Array of columns to group results by, e.g. ["service.name", "http.status_code"]
orders: Array specifying how to sort results, e.g. {"op": "COUNT", "order": "descending"}
time_range: Relative time range in seconds (e.g., 3600 for the last hour)
start_time and end_time: UNIX timestamps for absolute time ranges
having: Filter results based on calculation values, e.g. {"calculate_op": "COUNT", "op": ">", "value": 100}
Here are some real-world example queries:
Find the slowest requests (root spans) by endpoint:

{
  "environment": "production",
  "dataset": "api-requests",
  "calculations": [
    { "column": "duration_ms", "op": "HEATMAP" },
    { "column": "duration_ms", "op": "MAX" }
  ],
  "filters": [
    { "column": "trace.parent_id", "op": "does-not-exist" }
  ],
  "breakdowns": ["http.target", "name"],
  "orders": [
    { "column": "duration_ms", "op": "MAX", "order": "descending" }
  ]
}
Analyze database query latency over the last week:

{
  "environment": "production",
  "dataset": "api-requests",
  "calculations": [
    { "column": "duration_ms", "op": "HEATMAP" }
  ],
  "filters": [
    { "column": "db.statement", "op": "exists" }
  ],
  "breakdowns": ["db.statement"],
  "time_range": 604800
}
Count exceptions by message and parent operation:

{
  "environment": "production",
  "dataset": "api-requests",
  "calculations": [
    { "op": "COUNT" }
  ],
  "filters": [
    { "column": "exception.message", "op": "exists" },
    { "column": "parent_name", "op": "exists" }
  ],
  "breakdowns": ["exception.message", "parent_name"],
  "orders": [
    { "op": "COUNT", "order": "descending" }
  ]
}
Licensed under the MIT License.