# Zeplin MCP server: AI-assisted UI development
Connect AI agents like Cursor, Windsurf, and VS Code (w/ Copilot) to Zeplin. Using the MCP server, AI agents can tap into:
- Component and screen specs: Detailed specs and assets for both components and entire screens — helping agents generate UI code that closely matches the designs.
- Documentation: Annotations added to screens that provide extra context, like how things should behave or tips for implementation — letting the agent go beyond static visuals and build real interactions.
- Design tokens: Colors, typography, spacing, and other design variables used across the project, so your agent can reuse existing tokens where possible.
## Table of contents
- Prerequisites
- Installation
- Configuration
- Development
- Usage with MCP Clients (e.g., Cursor)
- Crafting Effective Prompts
## Prerequisites
- Node.js (v20 or later)
- A Zeplin account.
- A Zeplin personal access token. You can generate one from your Zeplin profile, under "Developer" > "Personal access tokens".
## Installation
### Manual installation
To start using the MCP server, you first need to configure your client (e.g. Cursor, VS Code, Windsurf, Claude Code). Most clients have an option to add a new MCP server. When prompted, enter the following command:
```shell
npx @zeplin/mcp-server@latest
```
You also need to provide your Zeplin personal access token via the `ZEPLIN_ACCESS_TOKEN` environment variable.

For example, if you're using Cursor, here's how your MCP settings should look:
```json
{
  "mcpServers": {
    "zeplin": {
      "command": "npx",
      "args": ["@zeplin/mcp-server@latest"],
      "env": {
        "ZEPLIN_ACCESS_TOKEN": "<YOUR_ZEPLIN_PERSONAL_ACCESS_TOKEN>"
      }
    }
  }
}
```
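If your client is VS Code, the shape is similar but lives under a `servers` key in `.vscode/mcp.json`. The sketch below reflects recent VS Code builds — consult your client's documentation for the exact schema:

```json
{
  "servers": {
    "zeplin": {
      "command": "npx",
      "args": ["@zeplin/mcp-server@latest"],
      "env": {
        "ZEPLIN_ACCESS_TOKEN": "<YOUR_ZEPLIN_PERSONAL_ACCESS_TOKEN>"
      }
    }
  }
}
```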
## Development
The project includes several npm scripts to help with development:
```shell
# Run TypeScript compiler in watch mode for development
npm run dev

# Build the project for production
npm run build

# Run ESLint on source files
npm run lint

# Automatically fix ESLint issues where possible
npm run lint:fix

# Test the MCP server locally with the inspector tool
npm run inspect
```
To run `npm run inspect`, first create a `.env` file in the root directory:

```
ZEPLIN_ACCESS_TOKEN=<YOUR_ZEPLIN_PERSONAL_ACCESS_TOKEN>
```
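The server needs this variable at startup to call the Zeplin API. As a minimal sketch of what such a check looks like (the `requireToken` helper here is hypothetical, not part of the actual codebase):

```typescript
// Illustrative sketch: validate ZEPLIN_ACCESS_TOKEN before starting the server.
// `requireToken` is a hypothetical helper, not the project's real entry point.
function requireToken(env: Record<string, string | undefined>): string {
  const token = env.ZEPLIN_ACCESS_TOKEN;
  if (token === undefined || token.length === 0) {
    throw new Error(
      "ZEPLIN_ACCESS_TOKEN is not set. Add it to .env or your MCP client config."
    );
  }
  return token;
}
```

In practice the MCP client (or your shell) injects the variable, so this check simply fails fast with a readable message instead of producing opaque 401s later.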
### Code style and linting

This project uses ESLint to enforce code quality and consistency. The configuration is in `eslint.config.js`. Key style guidelines include:
- 2 space indentation
- Double quotes for strings
- Semicolons required
- No trailing spaces
- Organized imports
When contributing to this project, please ensure your code follows these guidelines by running `npm run lint:fix` before submitting changes.
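For illustration only, rules matching these guidelines might look like the following in a flat ESLint config — this is a hedged sketch, not the project's actual `eslint.config.js`:

```javascript
// Sketch of ESLint flat-config rules matching the guidelines above.
// Illustrative only — the project's real configuration is eslint.config.js.
export default [
  {
    rules: {
      indent: ["error", 2],            // 2-space indentation
      quotes: ["error", "double"],     // double quotes for strings
      semi: ["error", "always"],       // semicolons required
      "no-trailing-spaces": "error",   // no trailing whitespace
    },
  },
];
```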
## Crafting effective prompts

The quality and specificity of your prompts significantly affect the AI's ability to generate accurate, useful code. The practices below are optional, but they noticeably improve output quality.
### Example prompt 1: Minor changes/additions

Use this when you need to implement a small update or addition to an existing screen or component based on a new Zeplin design version.

```
The latest design for the following screen includes a new addition: a Checkbox component has been added to the MenuItem component. Here is the short URL of the screen: <zeplin short url of the screen, e.g., https://zpl.io/abc123X>. Focus on the MenuItem component.

The Checkbox component can be found under the path/to/your/checkbox/component directory.
The relevant screen file is located at path/to/your/screen/file.tsx.
The MenuItem component, which needs to be modified, is located at path/to/your/menuitem/component.

Please implement this new addition.
```
Why this is effective:
- Contextualizes the change: Clearly states what’s new.
- Provides the Zeplin link: Allows the MCP server to fetch the latest design data.
- Gives file paths: Helps the AI locate existing code to modify.
- Specifies components involved: Narrows down the scope of work.
### Example prompt 2: Larger designs (component-first)

For implementing larger screens or features, it's often best to build individual components first and then assemble them.

```
Implement this component: <zeplin short url of the first component, e.g., https://zpl.io/def456Y>. Use Zeplin for design specifications.
```

(AI generates the first component...)

```
Implement this other component: <zeplin short url of the second component, e.g., https://zpl.io/ghi789Z>. Use Zeplin for design specifications.
```

(AI generates the second component...)

```
Now, using the components you just implemented (and any other existing components), implement the following screen: <zeplin short url of the screen, e.g., https://zpl.io/jkl012A>. Use Zeplin for the screen layout and any direct elements.
```
Why this is effective:
- Breaks down complexity: Tackles smaller, manageable pieces first.
- Iterative approach: Allows for review and correction at each step.
- Builds on previous work: The AI can use the components it just created.
- Clear Zeplin references: Ensures each piece is based on the correct design.
### Strategies to deal with context window limitations
When dealing with complex Zeplin screens or components with many variants and layers, the amount of design data fetched can sometimes be extensive. This can potentially exceed the context window limitations of the AI model you are using, leading to truncated information or less effective code generation. Here are several strategies to manage the amount of information sent to the model:
- **Limit screen variants (`includeVariants: false`)**
  - How it works: When using the `get_screen` tool, the model can be instructed to fetch only the specific screen version linked in the URL, rather than all its variants (e.g., different states, sizes, themes). This is done by setting the `includeVariants` parameter to `false` during the tool call.
  - When to use: If your prompt is focused on a single specific version of a screen, or if the variants are not immediately relevant to the task at hand. This significantly reduces the amount of data related to variant properties and their respective layer structures.
  - Example prompt: "Implement the login form from this screen: https://zpl.io/abc123X. I only need the specific version linked, not all its variants." The AI agent, when calling `get_screen`, should then ideally use `includeVariants: false`.
- **Focus on specific layers/components (`targetLayerName` or targeted prompts)**
  - How it works (using `targetLayerName`): The `get_screen` tool has a `targetLayerName` parameter. If the model can identify a specific layer name from your prompt (e.g., "the 'Submit Button'"), it can use this parameter. The server will then return data primarily for that layer and its children, rather than the entire screen's layer tree.
  - How it works (targeted prompts): Even without explicitly using `targetLayerName` in the tool call, very specific prompts can guide the model to internally prioritize or summarize information related to the mentioned element.
  - When to use: When your task involves a specific part of a larger screen, such as a single button, an icon, or a text block.
  - Example prompt: "Focus on the 'UserProfileHeader' component within this screen: https://zpl.io/screenXYZ. I need to implement its layout and text styles." If the AI uses `get_screen`, it could populate `targetLayerName: "UserProfileHeader"`.
- **Iterative, component-first implementation**
  - How it works: As detailed in Example prompt 2: Larger designs (component-first), break down the implementation of a complex screen into smaller, component-sized tasks.
  - When to use: For any non-trivial screen. This approach naturally limits the scope of each `get_component` or `get_screen` call to a manageable size.
  - Benefit: Each request to the Zeplin MCP server will fetch a smaller, more focused dataset, making it easier to stay within context limits and allowing the model to concentrate on one piece at a time.
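To show how these parameters compose, the `get_screen` arguments for a focused request could be shaped like this. Only `includeVariants` and `targetLayerName` come from the tool description above; the `url` field name and the helper function are illustrative assumptions:

```typescript
// Hypothetical shape of get_screen tool arguments. Only includeVariants and
// targetLayerName are documented parameters; the rest is an assumption here.
interface GetScreenArgs {
  url: string;                // short URL, e.g. https://zpl.io/abc123X
  includeVariants?: boolean;  // false => fetch only the linked version
  targetLayerName?: string;   // restrict output to one layer subtree
}

// Build a minimal, focused request: skip variants and zoom into one layer.
function focusedScreenRequest(url: string, layer?: string): GetScreenArgs {
  const args: GetScreenArgs = { url, includeVariants: false };
  if (layer !== undefined) {
    args.targetLayerName = layer;
  }
  return args;
}
```

Combining both options gives the model the smallest useful slice of design data, which is usually the difference between a truncated context and a clean implementation.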