Limelight
Make your app's runtime context available to AI
Documentation Index
Fetch the complete documentation index at https://docs.getlimelight.io/llms.txt. Use this file to discover all available pages before exploring further.
MCP Server
Give your AI coding assistant runtime context from your running app
Overview
The Limelight MCP Server connects your running React or React Native app to AI coding assistants like Cursor, Claude Code, and any MCP-compatible editor.
Instead of copying logs into ChatGPT or hoping your AI can guess what's wrong from source code alone, Limelight streams live runtime data — renders, state changes, network requests, and console logs — directly into your editor's AI.
Ask your AI "why is my app slow?" and it answers with real data, not guesses.
The MCP server runs locally on your machine. No data leaves your system.
Quickstart
<Tabs>
<Tab title="Claude Code">
```bash theme={null}
claude mcp add limelight-mcp npx limelight-mcp
```
</Tab>
<Tab title="Cursor">
Add to your MCP settings:
```json theme={null}
{
"mcpServers": {
"limelight": {
"command": "npx",
"args": ["limelight-mcp"]
}
}
}
```
</Tab>
<Tab title="Other MCP Clients">
Any client that supports stdio MCP servers works. Use the command:
```
npx limelight-mcp
```
</Tab>
</Tabs>
Next, install the Limelight SDK in your app:
```bash theme={null}
npm install @getlimelight/sdk
```
Initialize with the MCP target:
```typescript theme={null}
import { Limelight } from "@getlimelight/sdk";
Limelight.connect({
target: "mcp",
});
```
<Tip>
Add Zustand or Redux stores to capture state changes:
```typescript theme={null}
Limelight.connect({
target: "mcp",
stores: { authStore: useAuthStore, cartStore: useCartStore },
});
```
</Tip>
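The SDK's `stores` option is shown above with Zustand hooks. Zustand hooks expose `getState` and `subscribe`, which is presumably the surface an integration like this reads. A minimal stand-in store illustrating that shape (this mini-store is a sketch for illustration, not the real SDK or Zustand implementation):

```typescript
// Hypothetical sketch: a minimal object exposing the getState/subscribe
// surface that Zustand stores provide, which Limelight.connect({ stores })
// would plausibly read to capture state changes. Not the real SDK API.
type Listener<S> = (state: S, prev: S) => void;

function createMiniStore<S extends object>(initial: S) {
  let state = initial;
  const listeners = new Set<Listener<S>>();
  return {
    getState: () => state,
    setState: (partial: Partial<S>) => {
      const prev = state;
      state = { ...state, ...partial };
      listeners.forEach((l) => l(state, prev)); // notify subscribers with new and previous state
    },
    subscribe: (l: Listener<S>) => {
      listeners.add(l);
      return () => listeners.delete(l); // returns an unsubscribe handle, as Zustand does
    },
  };
}

// Example: a tiny auth store whose changes a subscriber can diff
const authStore = createMiniStore({ user: null as string | null, loggedIn: false });
const changes: string[] = [];
authStore.subscribe((next, prev) => {
  changes.push(`loggedIn: ${prev.loggedIn} -> ${next.loggedIn}`);
});
authStore.setState({ user: "ada", loggedIn: true });
```

Because the store exposes both current state and a change stream, a consumer can record diffs over time, which matches the "recent changes and diffs" capability described below.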
With your app running and your AI editor open, try:
* *"My app feels slow. Do you see any issues?"*
* *"Why is my search showing wrong results?"*
* *"Which components are re-rendering the most?"*
Your AI will call Limelight's tools automatically to inspect your app's runtime state and give you answers backed by real data.
What your AI can see
Once connected, your AI assistant has access to everything happening in your running app:
* **Renders** — which components are rendering, how often, how expensive, and why. Detects render loops, unnecessary re-renders, and unstable props.
* **State** — Zustand and Redux store contents, recent changes, and diffs. See exactly how state evolved over time.
* **Network** — every request and response with timing, status, headers, and bodies. Detects race conditions, waterfalls, and failed requests.
* **Console** — all console output with levels, timestamps, and stack traces. Filtered and searchable.
Tools reference
The MCP server exposes 11 tools that your AI calls automatically based on your questions. For detailed usage workflows and examples, see the Tools & Workflows guide.
Diagnostics
High-level snapshot of your running app. Event counts, errors, suspicious components, and detected patterns. **This is usually the first tool your AI calls.**
Returns: total events by type, error/warning counts, top rendered components, suspicious items, and session metadata.
Proactive scan across all captured events. Runs Limelight's correlation engine and Debug IR pipeline on anything that looks problematic.
Detects: unnecessary re-renders, unstable props, render cascades, race conditions, N+1 queries, state thrashing, and more.
| Parameter | Type | Default | Description |
| ------------- | ------- | ------- | -------------------------------------- |
| `verbose` | boolean | false | Include causal summaries and event IDs |
| `limit` | number | 5 | Max issues to return |
| `deduplicate` | boolean | true | Group similar issues together |
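On the wire, your editor invokes each of these tools through the standard MCP `tools/call` request. A sketch of such a call for the scan tool with the parameters above — note the tool name `scan_for_issues` is a guess for illustration; the actual names are discovered by the client via MCP's `tools/list`:

```json theme={null}
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "scan_for_issues",
    "arguments": { "verbose": true, "limit": 3, "deduplicate": true }
  }
}
```

You never write these requests yourself — the AI assistant issues them automatically based on your question.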
Full root cause analysis on an error. Runs the Debug IR pipeline to produce a causal chain, state deltas, violations, and suggested fixes.
| Parameter | Type | Default | Description |
| --------------- | ------------------------ | --------------- | ---------------------------- |
| `error_id` | string | — | Specific event ID |
| `error_pattern` | string | — | Match against error messages |
| `scope` | `"most_recent" \| "all"` | `"most_recent"` | Which errors to analyze |
Querying
Filter and search captured network requests.

| Parameter | Type | Default | Description |
| ----------------- | -------------- | ------- | ------------------------------- |
| `url_pattern` | string | — | URL substring or pattern |
| `method` | string | — | HTTP method filter |
| `status_range` | `{ min, max }` | — | Status code range |
| `min_duration_ms` | number | — | Slow request threshold |
| `include_bodies` | boolean | false | Include request/response bodies |
| `limit` | number | 10 | Max results |
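A hedged sketch of how these filters plausibly combine — all supplied conditions AND-ed, with the defaults from the table. The event shape and exact matching rules (e.g. whether `url_pattern` is a substring or a regex) are assumptions, not documented behavior:

```typescript
// Hypothetical illustration of the network-query parameters above.
// Assumes substring matching for url_pattern; the real rules are internal.
interface NetworkEvent {
  url: string;
  method: string;
  status: number;
  durationMs: number;
}

interface NetworkQuery {
  url_pattern?: string;
  method?: string;
  status_range?: { min: number; max: number };
  min_duration_ms?: number;
  limit?: number;
}

function queryNetwork(events: NetworkEvent[], q: NetworkQuery): NetworkEvent[] {
  return events
    .filter((e) => (q.url_pattern ? e.url.includes(q.url_pattern) : true))
    .filter((e) => (q.method ? e.method === q.method.toUpperCase() : true))
    .filter((e) =>
      q.status_range ? e.status >= q.status_range.min && e.status <= q.status_range.max : true,
    )
    .filter((e) => (q.min_duration_ms ? e.durationMs >= q.min_duration_ms : true))
    .slice(0, q.limit ?? 10); // default limit of 10, per the table above
}
```

For example, `{ url_pattern: "/api/users", min_duration_ms: 1000 }` would surface only slow requests to that endpoint — the kind of query an AI issues when you ask "why is this page slow?".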
Filter and search console events.
| Parameter | Type | Default | Description |
| ---------------------- | ------------------------------------------------- | ------- | ---------------------- |
| `level` | `"error" \| "warn" \| "log" \| "info" \| "debug"` | — | Log level filter |
| `message_pattern` | string | — | Search within messages |
| `include_stack_traces` | boolean | auto | Include stack traces |
| `limit` | number | 10 | Max results |
Chronological view of all events within a time range.
| Parameter | Type | Default | Description |
| ---------------- | -------------------------------- | ------- | ------------------ |
| `last_n_seconds` | number | 10 | Time window |
| `event_types` | array | all | Filter by type |
| `min_severity` | `"info" \| "warning" \| "error"` | — | Minimum importance |
Deep dives
Full analysis of a React component — render history, props driving re-renders, and correlated state/network activity.

| Parameter | Type | Description |
| ---------------- | ------ | ------------------------ |
| `component_name` | string | Component to investigate |
Component render performance profiling. Shows render counts, costs, velocity, cause breakdown, and suspicious flags.
| Parameter | Type | Default | Description |
| ----------------- | ----------------------------------------------- | --------------- | ----------------------- |
| `component_name` | string | — | Filter to one component |
| `suspicious_only` | boolean | false | Only flagged components |
| `sort_by` | `"render_count" \| "render_cost" \| "velocity"` | `"render_cost"` | Sort order |
| `limit` | number | 10 | Max results |
Current state store contents and recent change history.
| Parameter | Type | Default | Description |
| ----------------- | ------- | ------- | ---------------------------- |
| `store_id` | string | — | Specific store |
| `path` | string | — | Dot-notation path into state |
| `include_history` | boolean | false | Include recent changes |
| `history_limit` | number | 10 | Number of recent changes |
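The `path` parameter presumably behaves like a standard dot-notation lookup (e.g. `cart.items` to read `state.cart.items`). A minimal sketch of that semantics — the real tool's path syntax is not specified beyond "dot-notation", so treat this as an assumption:

```typescript
// Hypothetical: resolve a dot-notation path like "cart.total" into nested
// state, returning undefined when any segment along the way is missing.
function getAtPath(state: unknown, path: string): unknown {
  return path.split(".").reduce<unknown>((acc, key) => {
    if (acc !== null && typeof acc === "object") {
      return (acc as Record<string, unknown>)[key]; // descend one level
    }
    return undefined; // path walked off the end of the object
  }, state);
}

// Example: reading into a cart store's nested state
const cartState = { cart: { items: [{ sku: "A1" }], total: 9.99 }, open: false };
```

Scoping a query with `path` keeps tool responses small — the AI can read `cart.total` without pulling the whole store into context.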
Find everything related to a specific event using Limelight's correlation engine. Returns a timeline (before/concurrent/after) and a correlation graph with edge types and confidence scores.
| Parameter | Type | Default | Description |
| ---------- | ------- | ------- | ----------------------------------- |
| `event_id` | string | — | Event to correlate (required) |
| `verbose` | boolean | false | Full graph with all nodes and edges |
Retrieve the full details of a single event by its ID. Returns complete event data including bodies, headers, stack traces, state diffs, or render details depending on event type. Use this for cheap inspection after finding event IDs from other tools.
| Parameter | Type | Description |
| ---------- | ------ | ------------------------------------------ |
| `event_id` | string | The ID of the event to retrieve (required) |
Configuration
The MCP server accepts CLI arguments for customization:
```bash theme={null}
npx limelight-mcp --port 9229 --max-events 10000 --verbose
```
| Flag | Default | Description |
|---|---|---|
| `--port` | 9229 | WebSocket port for SDK connection |
| `--max-events` | 10000 | Maximum events stored in memory |
| `--verbose` | false | Enable verbose logging |
How it works
Your App (with SDK) → WebSocket → Limelight MCP Server → stdio → AI Editor
- The Limelight SDK captures runtime events in your app
- Events stream to the MCP server over a local WebSocket connection
- The MCP server runs correlation and analysis on the events
- Your AI assistant calls Limelight's tools via the MCP protocol
- Responses include structured, pre-analyzed debugging context — not raw logs
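The wire format between the SDK and the MCP server is not documented; purely as an illustration of the events streamed in step 2, a plausible envelope might look like this (all field names are assumptions, not Limelight's actual protocol):

```typescript
// Hypothetical event envelope for the SDK -> MCP server WebSocket stream.
// Field names are illustrative; the actual protocol is internal to Limelight.
type EventType = "render" | "state" | "network" | "console";
type Severity = "info" | "warning" | "error";

interface RuntimeEvent {
  id: string;                       // unique id, referenced by correlation tools
  type: EventType;
  severity: Severity;
  timestamp: number;                // epoch ms, used for timeline ordering
  payload: Record<string, unknown>; // type-specific details (props, diffs, bodies, ...)
}

function makeEvent(
  type: EventType,
  severity: Severity,
  payload: Record<string, unknown>,
): RuntimeEvent {
  return {
    id: `${type}-${Math.random().toString(36).slice(2, 10)}`, // cheap unique-ish id
    type,
    severity,
    timestamp: Date.now(),
    payload,
  };
}

// A console error as it might cross the WebSocket
const evt = makeEvent("console", "error", {
  level: "error",
  message: "Cannot read properties of undefined",
});
```

A uniform envelope like this is what lets the server correlate renders, state changes, network calls, and logs on one timeline rather than treating them as separate streams.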
All data stays on your machine. The MCP server runs locally and communicates with your editor over stdio.
The MCP server stores events in memory. Data resets when the server restarts. Maximum capacity is configurable with `--max-events`.
Supported frameworks
| Framework | Status |
|---|---|
| React Native | Supported |
| React (web) | Supported |
| Next.js | Supported |
| Node | Supported |
Supported state libraries
| Library | Status |
|---|---|
| Zustand | Supported |
| Redux | Supported |
| Jotai | Coming soon |
| MobX | Planned |