🦜🛠️ LangSmith MCP Server

A production-ready Model Context Protocol (MCP) server that provides seamless integration with the LangSmith observability platform. This server enables language models to fetch conversation history, prompts, runs and traces, datasets, experiments, and billing usage from LangSmith.

📋 Example Use Cases

The server enables powerful capabilities including:

  • 💬 Conversation History: "Fetch the history of my conversation from thread 'thread-123' in project 'my-chatbot'" (paginated by character budget)
  • 📚 Prompt Management: "Get all public prompts in my workspace" / "Pull the template for the 'legal-case-summarizer' prompt"
  • 🔍 Traces & Runs: "Fetch the latest 10 root runs from project 'alpha'" / "Get all runs for trace <uuid> (page 2 of 5)"
  • 📊 Datasets: "List datasets of type chat" / "Read examples from dataset 'customer-support-qa'"
  • 🧪 Experiments: "List experiments for dataset 'my-eval-set' with latency and cost metrics"
  • 📈 Billing: "Get billing usage for September 2025"

🚀 Quickstart

A hosted version of the LangSmith MCP Server is available over HTTP-streamable transport, so you can connect without running the server yourself:

  • URL: https://langsmith-mcp-server.onrender.com/mcp
  • Hosting: Render, built from this public repo using the project's Dockerfile.

Use it like any HTTP-streamable MCP server: point your client at the URL and send your LangSmith API key in the LANGSMITH-API-KEY header. No local install or Docker required.

Example (Cursor mcp.json):

{
  "mcpServers": {
    "LangSmith MCP (Hosted)": {
      "url": "https://langsmith-mcp-server.onrender.com/mcp",
      "headers": {
        "LANGSMITH-API-KEY": "lsv2_pt_your_api_key_here"
      }
    }
  }
}

Optional headers: LANGSMITH-WORKSPACE-ID, LANGSMITH-ENDPOINT (same as in the Docker Deployment section below).

Note: This deployed instance is intended for LangSmith Cloud. If you use a self-hosted LangSmith instance, run the server yourself and point it at your endpoint; see the Docker Deployment section below.

🛠️ Available Tools

The LangSmith MCP Server provides the following tools for integration with LangSmith.

💬 Conversation & Threads

| Tool Name | Description |
| --- | --- |
| get_thread_history | Retrieve message history for a conversation thread. Uses char-based pagination: pass page_number (1-based), and use the returned total_pages to request more pages. Optional max_chars_per_page and preview_chars control page size and long-string truncation. |

📚 Prompt Management

| Tool Name | Description |
| --- | --- |
| list_prompts | Fetch prompts from LangSmith with optional filtering by visibility (public/private) and limit. |
| get_prompt_by_name | Get a specific prompt by its exact name, returning the prompt details and template. |
| push_prompt | Documentation-only: how to create and push prompts to LangSmith. |

🔍 Traces & Runs

| Tool Name | Description |
| --- | --- |
| fetch_runs | Fetch LangSmith runs (traces, tools, chains, etc.) from one or more projects. Supports filters (run_type, error, is_root), FQL (filter, trace_filter, tree_filter), and ordering. When trace_id is set, returns char-based paginated pages; otherwise returns one batch up to limit. Always pass limit and page_number. |
| list_projects | List LangSmith projects with optional filtering by name, dataset, and detail level (simplified vs full). |
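
For illustration, here is a hedged sketch of arguments a client might pass to fetch_runs. The filter and paging names come from the description above, but the project name, FQL string, and values are placeholders; confirm the exact argument schema via the MCP tool listing before relying on it.

# Example fetch_runs arguments (placeholders; see the Python client example in the
# Docker Deployment section for how to pass them to session.call_tool).
fetch_runs_args = {
    "project_name": "alpha",            # hypothetical project name
    "run_type": "llm",                  # filter by run type
    "is_root": True,                    # root runs only
    "error": False,                     # exclude errored runs
    "limit": 10,                        # always pass limit ...
    "page_number": 1,                   # ... and page_number (1-based)
    # "trace_id": "...",                # set to page through a single trace
    # "filter": 'eq(run_type, "llm")',  # optional FQL filter
}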

📊 Datasets & Examples

| Tool Name | Description |
| --- | --- |
| list_datasets | Fetch datasets with filtering by ID, type, name, name substring, or metadata. |
| list_examples | Fetch examples from a dataset by dataset ID/name or example IDs, with filter, metadata, splits, and optional as_of version. |
| read_dataset | Read a single dataset by ID or name. |
| read_example | Read a single example by ID, with optional as_of version. |
| create_dataset | Documentation-only: how to create datasets in LangSmith. |
| update_examples | Documentation-only: how to update dataset examples in LangSmith. |

🧪 Experiments & Evaluations

| Tool Name | Description |
| --- | --- |
| list_experiments | List experiment projects (reference projects) for a dataset. Requires reference_dataset_id or reference_dataset_name. Returns key metrics (latency, cost, feedback stats). |
| run_experiment | Documentation-only: how to run experiments and evaluations in LangSmith. |

📈 Usage & Billing

| Tool Name | Description |
| --- | --- |
| get_billing_usage | Fetch organization billing usage (e.g. trace counts) for a date range. Optional workspace filter; returns metrics with workspace names inline. |

📄 Pagination (char-based)

Several tools use stateless, character-budget pagination so responses stay within a size limit and work well with LLM clients:

  • Where it's used: get_thread_history and fetch_runs (when trace_id is set).
  • Parameters: You send page_number (1-based) on every request. Optional: max_chars_per_page (default 25000, cap 30000) and preview_chars (truncate long strings with "… (+N chars)").
  • Response: Each response includes page_number, total_pages, and the page payload (result for messages, runs for runs). To get more, call again with page_number = 2, then 3, up to total_pages, as sketched in the client example after this list.
  • Why it's useful: Pages are built by JSON character count, not item count, so each page fits within a fixed size. No cursor or server-side state; just integer page numbers.

🛠️ Installation Options

πŸ“ General Prerequisites

  1. Install uv (a fast Python package installer and resolver):

    curl -LsSf https://astral.sh/uv/install.sh | sh
    
  2. Clone this repository and navigate to the project directory:

    git clone https://github.com/langchain-ai/langsmith-mcp-server.git
    cd langsmith-mcp-server
    

🔌 MCP Client Integration

Once you have the LangSmith MCP Server, you can integrate it with various MCP-compatible clients. You have two installation options:

📦 From PyPI

  1. Install the package:

    uv run pip install --upgrade langsmith-mcp-server
    
  2. Add to your client MCP config:

    {
        "mcpServers": {
            "LangSmith API MCP Server": {
                "command": "/path/to/uvx",
                "args": [
                    "langsmith-mcp-server"
                ],
                "env": {
                    "LANGSMITH_API_KEY": "your_langsmith_api_key",
                    "LANGSMITH_WORKSPACE_ID": "your_workspace_id",
                    "LANGSMITH_ENDPOINT": "https://api.smith.langchain.com"
                }
            }
        }
    }
    

⚙️ From Source

Add the following configuration to your MCP client settings (run from the project root so the package is found):

{
    "mcpServers": {
        "LangSmith API MCP Server": {
            "command": "/path/to/uv",
            "args": [
                "--directory",
                "/path/to/langsmith-mcp-server",
                "run",
                "langsmith_mcp_server/server.py"
            ],
            "env": {
                "LANGSMITH_API_KEY": "your_langsmith_api_key",
                "LANGSMITH_WORKSPACE_ID": "your_workspace_id",
                "LANGSMITH_ENDPOINT": "https://api.smith.langchain.com"
            }
        }
    }
}

Replace the following placeholders:

  • /path/to/uv: The absolute path to your uv installation (e.g., /Users/username/.local/bin/uv). You can find it with which uv.
  • /path/to/langsmith-mcp-server: The absolute path to the project root (the directory containing pyproject.toml and langsmith_mcp_server/).
  • your_langsmith_api_key: Your LangSmith API key (required).
  • your_workspace_id: Your LangSmith workspace ID (optional, for API keys scoped to multiple workspaces).
  • https://api.smith.langchain.com: The LangSmith API endpoint (optional, defaults to the standard endpoint).

Example configuration (PyPI/uvx):

{
    "mcpServers": {
        "LangSmith API MCP Server": {
            "command": "/path/to/uvx",
            "args": ["langsmith-mcp-server"],
            "env": {
                "LANGSMITH_API_KEY": "lsv2_pt_your_key_here",
                "LANGSMITH_WORKSPACE_ID": "your_workspace_id",
                "LANGSMITH_ENDPOINT": "https://api.smith.langchain.com"
            }
        }
    }
}

Copy this configuration into Cursor → MCP Settings (replace /path/to/uvx with the output of which uvx).


🔧 Headers (tool invocation)

When connecting over HTTP (e.g. streamable HTTP or a hosted MCP endpoint), the server uses headers for authentication and configuration. Your MCP client must send these with each request; no environment variables are required for tool invocation.

| Header | Required | Description |
| --- | --- | --- |
| LANGSMITH-API-KEY | ✅ Yes | Your LangSmith API key for tool calls (list prompts, fetch runs, etc.) |
| LANGSMITH-WORKSPACE-ID | ❌ No | Workspace ID for API keys scoped to multiple workspaces |
| LANGSMITH-ENDPOINT | ❌ No | Custom API endpoint URL (for self-hosted or EU region) |

Optional headers used only when server monitoring is enabled (for grouping traces by session):

| Header | Description |
| --- | --- |
| mcp-session-id | Session or thread id; stored in trace metadata as session_id |
| x-session-id | Fallback if mcp-session-id is not set |
| x-request-id | Fallback for request-scoped grouping |

Stdio transport: When running the server over stdio (e.g. uvx langsmith-mcp-server), there are no headers. The server falls back to the environment variables LANGSMITH_API_KEY, LANGSMITH_WORKSPACE_ID, and LANGSMITH_ENDPOINT in the process environment so that tool invocation still works.


🔧 Environment variables

When using HTTP, tool invocation relies on headers rather than environment variables. Environment variables are used for:

  1. Stdio transport – fallback for credentials when no headers exist (see above).
  2. Load tests – e.g. tests/load_test_sessions.py reads LANGSMITH_API_KEY from the environment (or a .env file at the project root).
  3. Optional server monitoring – tracing tool calls to a second LangSmith instance (see below).

| Variable | Used for | Description |
| --- | --- | --- |
| LANGSMITH_API_KEY | Stdio fallback, load tests | LangSmith API key (when not provided via headers) |
| LANGSMITH_WORKSPACE_ID | Stdio fallback | Workspace ID (optional) |
| LANGSMITH_ENDPOINT | Stdio fallback | Custom endpoint URL (optional) |

Optional: Tool-call monitoring to a second LangSmith instance

You can log every MCP tool call (with inputs and outputs) to a separate LangSmith project for monitoring and analytics. Set these in your environment (e.g. in a .env file at the project root; the server loads .env via python-dotenv):

| Variable | Required | Description |
| --- | --- | --- |
| LANGSMITH_MONITORING_API_KEY | Yes (to enable) | API key for the LangSmith instance used for monitoring |
| LANGSMITH_MONITORING_ENDPOINT | No | Endpoint URL (default: cloud) |
| LANGSMITH_MONITORING_WORKSPACE_ID | No | Workspace ID for the monitoring instance |
| LANGSMITH_MONITORING_PROJECT | No | Project name for monitoring traces (default: mcp-server-monitoring) |
| LANGSMITH_TRACING | Yes (to send traces) | Set to true so traces are sent to LangSmith (custom instrumentation) |

Each tool run is traced with run_type="tool" and a session_id in metadata (from the mcp-session-id, x-session-id, or x-request-id header when using HTTP, or generated per request).
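
For example, a minimal .env sketch that turns monitoring on (placed at the project root and loaded via python-dotenv, as noted above; the key and project name values are placeholders):

# .env at the project root
LANGSMITH_MONITORING_API_KEY=lsv2_pt_monitoring_key_here
LANGSMITH_TRACING=true
# Optional overrides:
# LANGSMITH_MONITORING_PROJECT=mcp-server-monitoring
# LANGSMITH_MONITORING_ENDPOINT=https://api.smith.langchain.com
# LANGSMITH_MONITORING_WORKSPACE_ID=your_workspace_id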

If you use the hosted LangSmith MCP Server, anonymous usage data is sent to a separate LangSmith project so we can iterate and improve the product.

🐳 Docker Deployment (HTTP-Streamable)

The LangSmith MCP Server can be deployed as an HTTP server using Docker, enabling remote access via the HTTP-streamable protocol.

Building the Docker Image

docker build -t langsmith-mcp-server .

Running with Docker

docker run -p 8000:8000 langsmith-mcp-server

The API key is provided via the LANGSMITH-API-KEY header when connecting, so no environment variables are required for the HTTP-streamable protocol.

Connecting with HTTP-Streamable Protocol

Once the Docker container is running, you can connect to it using the HTTP-streamable transport. The server accepts authentication via headers:

Required header:

  • LANGSMITH-API-KEY: Your LangSmith API key

Optional headers:

  • LANGSMITH-WORKSPACE-ID: Workspace ID for API keys scoped to multiple workspaces
  • LANGSMITH-ENDPOINT: Custom endpoint URL (for self-hosted or EU region)

Example client configuration:

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

headers = {
    "LANGSMITH-API-KEY": "lsv2_pt_your_api_key_here",
    # Optional:
    # "LANGSMITH-WORKSPACE-ID": "your_workspace_id",
    # "LANGSMITH-ENDPOINT": "https://api.smith.langchain.com",
}

async with streamablehttp_client("http://localhost:8000/mcp", headers=headers) as (read, write, _):
    async with ClientSession(read, write) as session:
        await session.initialize()
        # Use the session to call tools, list prompts, etc.
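
For instance, the session block above can be continued with a tool call. The list_prompts tool and its limit parameter come from the tool table earlier; reading the payload from result.content[0].text assumes the tool returns text content, so treat this as a sketch:

        # Continuing inside the ClientSession block above:
        tools = await session.list_tools()                        # discover tools and their argument schemas
        result = await session.call_tool("list_prompts", {"limit": 5})
        print(result.content[0].text)                             # tool output arrives as text content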

Cursor Integration

To add the LangSmith MCP Server to Cursor using HTTP-streamable protocol, add the following to your mcp.json configuration file:

{
  "mcpServers": {
    "HTTP-Streamable LangSmith MCP Server": {
      "url": "http://localhost:8000/mcp",
      "headers": {
        "LANGSMITH-API-KEY": "lsv2_pt_your_api_key_here"
      }
    }
  }
}

Optional headers:

{
  "mcpServers": {
    "HTTP-Streamable LangSmith MCP Server": {
      "url": "http://localhost:8000/mcp",
      "headers": {
        "LANGSMITH-API-KEY": "lsv2_pt_your_api_key_here",
        "LANGSMITH-WORKSPACE-ID": "your_workspace_id",
        "LANGSMITH-ENDPOINT": "https://api.smith.langchain.com"
      }
    }
  }
}

Make sure the server is running before connecting Cursor to it.

Health Check

The server provides a health check endpoint:

curl http://localhost:8000/health

This endpoint does not require authentication and returns "LangSmith MCP server is running" when the server is healthy.

🧪 Development and Contributing

Prerequisites

  • Python 3.10+ (3.11+ recommended)
  • uv – install with curl -LsSf https://astral.sh/uv/install.sh | sh
  • LangSmith API key – from smith.langchain.com
  • Node.js (optional) – only if you want to use MCP Inspector to test the server (stdio or streamable-http)

Setup

git clone https://github.com/langchain-ai/langsmith-mcp-server.git
cd langsmith-mcp-server

uv sync                    # Install dependencies
uv sync --group test       # Include test dependencies (pytest, ruff, mypy)

uvx langsmith-mcp-server   # Verify CLI runs (stdio)

Development workflow

  1. Edit code in langsmith_mcp_server/ or tests/.
  2. Format and lint (required before committing):
    make format
    make lint
    
  3. Run tests:
    make test
    # Or a single file:
    make test TEST_FILE=tests/tools/test_dataset_tools.py
    
  4. Type-check (optional): uv run mypy langsmith_mcp_server/

Testing with MCP Inspector

You can test the server with MCP Inspector using either stdio or streamable-http.

  1. Start MCP Inspector:

    npx @modelcontextprotocol/inspector@latest
    

    Open http://localhost:6274 in your browser.

  2. Connect in the Inspector:

    • Stdio: Choose stdio transport and configure the server command (e.g. uv run langsmith-mcp-server) and set LANGSMITH_API_KEY in the environment.
    • Streamable HTTP: Start the server first (uv run uvicorn langsmith_mcp_server.server:app --host 0.0.0.0 --port 8000 or Docker), then choose streamable-http, URL http://localhost:8000/mcp, and add header LANGSMITH-API-KEY = your API key.

Load testing

A session-based load test opens many MCP sessions and calls the list_prompts tool in each, using langchain-mcp-adapters. Run from the CLI (no UI). The server must be running first.

uv sync --group load
# Terminal 1: start the server
uv run uvicorn langsmith_mcp_server.server:app --host 0.0.0.0 --port 8000
# Terminal 2: run the load test
uv run python tests/load_test_sessions.py --sessions 20 --calls-per-session 3

Options

| Option | Default | Description |
| --- | --- | --- |
| --url | http://localhost:8000/mcp | MCP endpoint URL |
| --api-key | from .env | LANGSMITH_API_KEY (or set in the project root .env) |
| --sessions | 10 | Number of concurrent sessions |
| --calls-per-session | 3 | list_prompts calls per session |
| --debug | off | Print step-by-step logs and the first error traceback |
| --report PATH | (none) | Write a report after the run (see below) |

Report

Use --report PATH to write a JSON report after the test (e.g. --report load_test_report creates load_test_report.json with config, summary, per-session results, and first error).

uv run python tests/load_test_sessions.py --sessions 5 --report load_test_report
# Creates: load_test_report.json (in current directory)

Contributing checklist

Before opening a PR:

  • make format and make lint pass
  • make test passes
  • New tools or behavior are documented (e.g. in CLAUDE.md if you change architecture or tools)
  • Error handling in tools returns {"error": "..."} rather than raising

For more detail (adding tools, code standards, troubleshooting), see CLAUDE.md.

📄 License

This project is distributed under the MIT License. For detailed terms and conditions, please refer to the LICENSE file.

Made with ❤️ by the LangChain Team
