Multi-Agent Monitoring Langfuse MCP Server
A Model Context Protocol (MCP) server for comprehensive monitoring and observability of multi-agent systems using Langfuse.
🎯 What This Does
This MCP server allows you to:
- Monitor all your agents in real-time
- Track performance metrics (latency, cost, token usage)
- Debug failed executions with detailed traces
- Analyze agent performance across time periods
- Compare different agent versions via metadata filters
- Manage costs and set budget alerts
- Visualize agent workflows
Quick Start
1. Prerequisites
- Python 3.11 or higher
- A Langfuse account (sign up at https://cloud.langfuse.com)
- Agents instrumented with Langfuse
2. Installation
# Install via pip
pip install -r requirements.txt
# Or install from source
git clone https://github.com/yourusername/langfuse-mcp-python.git
cd langfuse-mcp-python
pip install -e .
3. Configuration
Create a .env file with your Langfuse credentials:
cp .env.example .env
# Edit .env and add your credentials
Your .env should look like:
LANGFUSE_PUBLIC_KEY=pk-lf-xxxxx
LANGFUSE_SECRET_KEY=sk-lf-xxxxx
LANGFUSE_HOST=https://cloud.langfuse.com
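Before starting the server, it can help to confirm these variables are actually visible to the process. A minimal stdlib-only check (the variable names match the .env keys above; the placeholder values are purely illustrative):

```python
import os

REQUIRED = ("LANGFUSE_PUBLIC_KEY", "LANGFUSE_SECRET_KEY", "LANGFUSE_HOST")

# Placeholder values for illustration; in practice these come from your .env
os.environ.setdefault("LANGFUSE_PUBLIC_KEY", "pk-lf-xxxxx")
os.environ.setdefault("LANGFUSE_SECRET_KEY", "sk-lf-xxxxx")
os.environ.setdefault("LANGFUSE_HOST", "https://cloud.langfuse.com")

missing = [name for name in REQUIRED if not os.environ.get(name)]
print("ok" if not missing else f"missing: {', '.join(missing)}")
```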
4. Run As Streamable HTTP (URL)
If you want a Streamable HTTP URL that works across all tools, run the server with the Streamable HTTP transport:
python -m langfuse_mcp_python --transport streamable-http --host 127.0.0.1 --port 8000 --path /mcp
Or, to use the legacy SSE transport instead:
python -m langfuse_mcp_python --transport sse --host 127.0.0.1 --port 8000
You can then connect any Streamable HTTP-compatible MCP client to:
http://127.0.0.1:8000/mcp
If you are using Claude Desktop or Cursor, keep the default stdio transport in their configs.
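Under the hood, Streamable HTTP clients exchange JSON-RPC 2.0 messages with the /mcp endpoint, starting with an initialize request. A sketch of that first message (the protocolVersion and clientInfo values here are illustrative, not prescribed by this server):

```python
import json

# JSON-RPC 2.0 "initialize" request an MCP client POSTs to
# http://127.0.0.1:8000/mcp to open a session (values are illustrative)
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

print(json.dumps(initialize_request))
```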
4b. Set Up MCP Client
For Claude Desktop
Add to claude_desktop_config.json:
{
"mcpServers": {
"langfuse-monitor": {
"command": "uvx",
"args": ["--python", "3.11", "langfuse-mcp-python"],
"env": {
"LANGFUSE_PUBLIC_KEY": "pk-lf-xxxxx",
"LANGFUSE_SECRET_KEY": "sk-lf-xxxxx",
"LANGFUSE_HOST": "https://cloud.langfuse.com"
}
}
}
}
For Cursor
Add to .cursor/mcp.json:
{
"mcpServers": {
"langfuse-monitor": {
"command": "python",
"args": ["-m", "langfuse_mcp_python"],
"env": {
"LANGFUSE_PUBLIC_KEY": "pk-lf-xxxxx",
"LANGFUSE_SECRET_KEY": "sk-lf-xxxxx"
}
}
}
}
5. Instrument Your Agents
Make sure your agents send traces to Langfuse:
from langfuse.langchain import CallbackHandler
from langgraph.graph import StateGraph
# Create Langfuse callback handler
langfuse_handler = CallbackHandler(
public_key="pk-lf-xxxxx",
secret_key="sk-lf-xxxxx",
host="https://cloud.langfuse.com"
)
# Create your agent
workflow = StateGraph(AgentState)
workflow.add_node("planner", planner_node)
workflow.add_node("executor", executor_node)
app = workflow.compile()
# Run with Langfuse monitoring
result = app.invoke(
{"input": "user query"},
config={
"callbacks": [langfuse_handler],
"metadata": {
"agent_name": "my_planner_agent",
"version": "v1.0"
}
}
)
Project Structure
- src/langfuse_mcp_python/server.py: CLI entrypoint and stdio transport
- src/langfuse_mcp_python/http_server.py: Streamable HTTP and SSE transports
- src/langfuse_mcp_python/utils/tool_registry.py: Tool setup and registration
- src/langfuse_mcp_python/tools/: Tool implementations and specs
- src/langfuse_mcp_python/integrations/langfuse_client.py: Langfuse API client
- src/langfuse_mcp_python/core/base_tool.py: Shared cache and metrics
Available Tools
Monitoring and Analytics
- watch_agents: Monitor active agents
- get_trace: Fetch a trace by ID
- analyze_performance: Aggregate performance over time
- get_metrics: Aggregate metrics (latency, cost, tokens)
Scores and Evaluation
- get_scores: Fetch scores
- submit_score: Create a score
- get_score_configs: List score configurations
Prompts
- get_prompts: List prompts
- create_prompt: Create a prompt
- delete_prompt: Delete a prompt
Sessions
- get_sessions: List sessions
Datasets
- get_datasets: List datasets
- create_dataset: Create a dataset
- create_dataset_item: Add an item to a dataset
Models
- get_models: List models
- create_model: Create a model
- delete_model: Delete a model
Comments
- get_comments: List comments
- add_comment: Add a comment
Traces
- delete_trace: Delete a trace
Annotation Queues
- get_annotation_queues: List annotation queues
- create_annotation_queue: Create a queue
- get_queue_items: List queue items
- resolve_queue_item: Resolve a queue item
Blob Storage Integrations
- get_blob_storage_integrations: List integrations
- upsert_blob_storage_integration: Create or update an integration
- get_blob_storage_integration_status: Fetch integration status
- delete_blob_storage_integration: Delete an integration
LLM Connections
- get_llm_connections: List connections
- upsert_llm_connection: Create or update a connection
Projects
- get_projects: List projects
- create_project: Create a project
- update_project: Update a project
- delete_project: Delete a project
Example: watch_agents
Monitor all active agents in real-time.
Example:
Show me all active agents from the last hour
Response:
Active Agent Monitoring (last_1h)
Total Traces Found: 15
Showing: Top 10 traces
1. research_agent (Trace: trace-abc12...)
- Status: completed
- Session: session-xyz
- Started: 2026-03-19T10:25:00Z
- Latency: 1250ms
- Tokens: 3420
- Cost: $0.0234
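Aggregations like the latency and cost figures above boil down to simple statistics over trace records. A stdlib sketch of that kind of summary over hypothetical records (the field names are illustrative, not the server's actual schema):

```python
import statistics

# Hypothetical trace records; field names are illustrative
traces = [
    {"agent_name": "research_agent", "latency_ms": 1250, "cost_usd": 0.0234},
    {"agent_name": "research_agent", "latency_ms": 980, "cost_usd": 0.0198},
    {"agent_name": "planner_agent", "latency_ms": 2100, "cost_usd": 0.0412},
]

latencies = [t["latency_ms"] for t in traces]
summary = {
    "trace_count": len(traces),
    "avg_latency_ms": round(statistics.mean(latencies), 1),
    "max_latency_ms": max(latencies),
    "total_cost_usd": round(sum(t["cost_usd"] for t in traces), 4),
}
print(summary)
```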
Advanced Usage
Filtering Agents
Watch only my research_agent and planner_agent from the last 24 hours
Performance Analysis
Analyze performance of my planner_agent over the last 24 hours
Cost Monitoring
Show cost breakdown by agent for the last week
Deep Debugging
Show trace details for trace-abc123
Architecture
MCP Client (Claude, Cursor, etc.)
-> Langfuse MCP Server (stdio/HTTP)
-> Langfuse API
-> Langfuse Platform
-> Your Langfuse Agents
Security Best Practices
- Never commit credentials - Use environment variables
- Rotate API keys regularly
- Use read-only keys where possible
- Enable rate limiting in production
- Mask sensitive data in traces
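For the last point, masking can be as simple as redacting anything shaped like a Langfuse API key before it reaches a log line or trace payload. A sketch based on the pk-lf-/sk-lf- prefixes shown earlier (an illustration, not the server's built-in masking):

```python
import re

def mask_langfuse_keys(text: str) -> str:
    """Replace anything shaped like a Langfuse API key with a masked form."""
    return re.sub(r"\b(pk|sk)-lf-[A-Za-z0-9]+", r"\1-lf-***", text)

print(mask_langfuse_keys("auth with sk-lf-abc123 and pk-lf-def456"))
```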
Example Monitoring Workflow
Daily Agent Health Check
- Check active agents: watch_agents
- Review performance: analyze_performance
- Check costs: get_metrics
- Investigate failures: get_trace
Agent Optimization Cycle
- Establish baseline: analyze_performance for current version metadata
- Deploy new version with different metadata
- Compare versions by running analyze_performance with version filters
- Make data-driven deployment decisions
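Comparing versions amounts to grouping traces by the version metadata attached during instrumentation and comparing the groups' metrics. A stdlib sketch over hypothetical records:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical traces carrying the "version" metadata set at instrumentation time
traces = [
    {"metadata": {"version": "v1.0"}, "latency_ms": 1500},
    {"metadata": {"version": "v1.0"}, "latency_ms": 1700},
    {"metadata": {"version": "v1.1"}, "latency_ms": 900},
    {"metadata": {"version": "v1.1"}, "latency_ms": 1100},
]

by_version = defaultdict(list)
for trace in traces:
    by_version[trace["metadata"]["version"]].append(trace["latency_ms"])

comparison = {version: mean(vals) for version, vals in by_version.items()}
print(comparison)  # average latency per version
```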
Cost Control
- Track costs: get_metrics grouped by agent
- Identify expensive agents
- Optimize high-cost operations
- Track savings over time
Troubleshooting
MCP Server Not Connecting
- Check environment variables are set correctly
- Verify Langfuse API keys are valid
- Ensure Python 3.11+ is installed
- Check logs: tail -f ~/.mcp/logs/langfuse-monitor.log
No Traces Found
- Verify agents are instrumented with Langfuse
- Check langfuse_handler is passed to agent invocations
- Ensure metadata includes agent_name
- Verify time window is appropriate
High Latency
- Reduce number of traces fetched (use filters)
- Enable caching: CACHE_ENABLED=true
- Use "minimal" depth for trace details
- Consider batch processing for large datasets
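The CACHE_ENABLED option suggests the server memoizes repeated lookups; the same idea can be sketched as a small TTL cache decorator (an illustration of the technique, not the server's actual implementation):

```python
import time
from functools import wraps

def ttl_cache(seconds: float):
    """Cache a function's results for `seconds`, then recompute."""
    def decorator(fn):
        store = {}

        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and now - hit[0] < seconds:
                return hit[1]  # fresh cached value
            value = fn(*args)
            store[args] = (now, value)
            return value

        return wrapper
    return decorator

calls = 0

@ttl_cache(seconds=60)
def fetch_trace(trace_id):
    """Stand-in for an expensive Langfuse API lookup (hypothetical)."""
    global calls
    calls += 1
    return {"id": trace_id}

fetch_trace("trace-abc123")
fetch_trace("trace-abc123")  # served from cache; the function body runs once
print(calls)
```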
Contributing
Contributions welcome! Please:
- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Submit a pull request
License
MIT License - see LICENSE file for details
Acknowledgments
- Langfuse - Open-source LLM observability
- LangGraph - Agent framework
- Model Context Protocol - MCP specification
Roadmap
- Core monitoring tools
- Performance analysis
- Cost tracking
- Debugging utilities
- Real-time streaming updates
- Custom alert system
- Predictive analytics
- A/B testing support
- Multi-project support
- Export to data warehouses
Version: 1.0.0
Last Updated: March 23, 2026
Status: Production Ready