Limelight
Make your app's runtime context available to AI
Documentation Index
Fetch the complete documentation index at https://docs.getlimelight.io/llms.txt. Use this file to discover all available pages before exploring further.
MCP Server
Give your AI coding assistant runtime context from your running app
Overview
The Limelight MCP Server connects your running React or React Native app to AI coding assistants like Cursor, Claude Code, and any MCP-compatible editor.
Instead of copying logs into ChatGPT or hoping your AI can guess what's wrong from source code alone, Limelight streams live runtime data — renders, state changes, network requests, and console logs — directly into your editor's AI.
Ask your AI "why is my app slow?" and it answers with real data, not guesses.
The MCP server runs locally on your machine. No data leaves your system.
Quickstart
<Tabs>
<Tab title="Claude Code">
```bash
claude mcp add limelight-mcp npx limelight-mcp
```
</Tab>
<Tab title="Cursor">
Add to your MCP settings:
```json
{
"mcpServers": {
"limelight": {
"command": "npx",
"args": ["limelight-mcp"]
}
}
}
```
</Tab>
<Tab title="Other MCP Clients">
Any client that supports stdio MCP servers works. Use the command:
```
npx limelight-mcp
```
</Tab>
</Tabs>
Install the SDK in your app:
```bash
npm install @getlimelight/sdk
```
Initialize with the MCP target:
```typescript
import { Limelight } from "@getlimelight/sdk";
Limelight.connect({
target: "mcp",
});
```
<Tip>
Add Zustand or Redux stores to capture state changes:
```typescript
Limelight.connect({
target: "mcp",
stores: { authStore: useAuthStore, cartStore: useCartStore },
});
```
</Tip>
With your app running and your AI editor open, try:
* *"My app feels slow. Do you see any issues?"*
* *"Why is my search showing wrong results?"*
* *"Which components are re-rendering the most?"*
Your AI will call Limelight's tools automatically to inspect your app's runtime state and give you answers backed by real data.
What your AI can see
Once connected, your AI assistant has access to everything happening in your running app:
* **Renders:** which components are rendering, how often, how expensive, and why. Detects render loops, unnecessary re-renders, and unstable props.
* **State:** Zustand and Redux store contents, recent changes, and diffs. See exactly how state evolved over time.
* **Network:** every request and response with timing, status, headers, and bodies. Detects race conditions, waterfalls, and failed requests.
* **Console:** all console output with levels, timestamps, and stack traces. Filtered and searchable.
Tools reference
The MCP server exposes 11 tools that your AI calls automatically based on your questions. For detailed usage workflows and examples, see the Tools & Workflows guide.
Diagnostics
High-level snapshot of your running app. Event counts, errors, suspicious components, and detected patterns. **This is usually the first tool your AI calls.**
Returns: total events by type, error/warning counts, top rendered components, suspicious items, and session metadata.
Proactive scan across all captured events. Runs Limelight's correlation engine and Debug IR pipeline on anything that looks problematic.
Detects: unnecessary re-renders, unstable props, render cascades, race conditions, N+1 queries, state thrashing, and more.
| Parameter | Type | Default | Description |
| ------------- | ------- | ------- | -------------------------------------- |
| `verbose` | boolean | false | Include causal summaries and event IDs |
| `limit` | number | 5 | Max issues to return |
| `deduplicate` | boolean | true | Group similar issues together |
Full root cause analysis on an error. Runs the Debug IR pipeline to produce a causal chain, state deltas, violations, and suggested fixes.
| Parameter | Type | Default | Description |
| --------------- | ------------------------ | --------------- | ---------------------------- |
| `error_id` | string | — | Specific event ID |
| `error_pattern` | string | — | Match against error messages |
| `scope` | `"most_recent" \| "all"` | `"most_recent"` | Which errors to analyze |
Querying
Filter and search captured network requests.
| Parameter | Type | Default | Description |
| ----------------- | -------------- | ------- | ------------------------------- |
| `url_pattern` | string | — | URL substring or pattern |
| `method` | string | — | HTTP method filter |
| `status_range` | `{ min, max }` | — | Status code range |
| `min_duration_ms` | number | — | Slow request threshold |
| `include_bodies` | boolean | false | Include request/response bodies |
| `limit` | number | 10 | Max results |
Filter and search console events.
| Parameter | Type | Default | Description |
| ---------------------- | ------------------------------------------------- | ------- | ---------------------- |
| `level` | `"error" \| "warn" \| "log" \| "info" \| "debug"` | — | Log level filter |
| `message_pattern` | string | — | Search within messages |
| `include_stack_traces` | boolean | auto | Include stack traces |
| `limit` | number | 10 | Max results |
Chronological view of all events within a time range.
| Parameter | Type | Default | Description |
| ---------------- | -------------------------------- | ------- | ------------------ |
| `last_n_seconds` | number | 10 | Time window |
| `event_types` | array | all | Filter by type |
| `min_severity` | `"info" \| "warning" \| "error"` | — | Minimum importance |
Deep dives
Full analysis of a React component — render history, props driving re-renders, and correlated state/network activity.
| Parameter | Type | Description |
| ---------------- | ------ | ------------------------ |
| `component_name` | string | Component to investigate |
Component render performance profiling. Shows render counts, costs, velocity, cause breakdown, and suspicious flags.
| Parameter | Type | Default | Description |
| ----------------- | ----------------------------------------------- | --------------- | ----------------------- |
| `component_name` | string | — | Filter to one component |
| `suspicious_only` | boolean | false | Only flagged components |
| `sort_by` | `"render_count" \| "render_cost" \| "velocity"` | `"render_cost"` | Sort order |
| `limit` | number | 10 | Max results |
Current state store contents and recent change history.
| Parameter | Type | Default | Description |
| ----------------- | ------- | ------- | ---------------------------- |
| `store_id` | string | — | Specific store |
| `path` | string | — | Dot-notation path into state |
| `include_history` | boolean | false | Include recent changes |
| `history_limit` | number | 10 | Number of recent changes |
Find everything related to a specific event using Limelight's correlation engine. Returns a timeline (before/concurrent/after) and a correlation graph with edge types and confidence scores.
| Parameter | Type | Default | Description |
| ---------- | ------- | ------- | ----------------------------------- |
| `event_id` | string | — | Event to correlate (required) |
| `verbose` | boolean | false | Full graph with all nodes and edges |
Retrieve the full details of a single event by its ID. Returns complete event data including bodies, headers, stack traces, state diffs, or render details depending on event type. Use this for cheap inspection after finding event IDs from other tools.
| Parameter | Type | Description |
| ---------- | ------ | ------------------------------------------ |
| `event_id` | string | The ID of the event to retrieve (required) |
Configuration
The MCP server accepts CLI arguments for customization:
`npx limelight-mcp --port 9229 --max-events 10000 --verbose`
| Flag | Default | Description |
|---|---|---|
| `--port` | 9229 | WebSocket port for SDK connection |
| `--max-events` | 10000 | Maximum events stored in memory |
| `--verbose` | false | Enable verbose logging |
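If your MCP client launches the server from a JSON config, these flags go in the `args` array. A sketch based on the Cursor-style settings shown in the Quickstart (the flag values here are arbitrary examples):

```json
{
  "mcpServers": {
    "limelight": {
      "command": "npx",
      "args": ["limelight-mcp", "--port", "9229", "--max-events", "20000"]
    }
  }
}
```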
How it works
Your App (with SDK) → WebSocket → Limelight MCP Server → stdio → AI Editor
- The Limelight SDK captures runtime events in your app
- Events stream to the MCP server over a local WebSocket connection
- The MCP server runs correlation and analysis on the events
- Your AI assistant calls Limelight's tools via the MCP protocol
- Responses include structured, pre-analyzed debugging context — not raw logs
All data stays on your machine. The MCP server runs locally and communicates with your editor over stdio.
The MCP server stores events in memory. Data resets when the server restarts. Maximum capacity is configurable with `--max-events`.
Supported frameworks
| Framework | Status |
|---|---|
| React Native | Supported |
| React (web) | Supported |
| Next.js | Supported |
| Node | Supported |
Supported state libraries
| Library | Status |
|---|---|
| Zustand | Supported |
| Redux | Supported |
| Jotai | Coming soon |
| MobX | Planned |