Official Zeplin server for AI-assisted UI development.
Connect AI agents like Cursor, Windsurf, and VS Code (with Copilot) to Zeplin. Using the MCP server, AI agents can tap into:
To start using the MCP server, you first need to configure your client (e.g. Cursor, VS Code, Windsurf, Claude Code). Most clients have an option to add a new MCP server. When prompted, enter the following command:
npx @zeplin/mcp-server@latest
You also need to provide your Zeplin access token via the `ZEPLIN_ACCESS_TOKEN` environment variable.
For example, if you’re using Cursor, here’s what your MCP settings should look like:
```json
{
  "mcpServers": {
    "zeplin": {
      "command": "npx",
      "args": ["@zeplin/mcp-server@latest"],
      "env": {
        "ZEPLIN_ACCESS_TOKEN": "<YOUR_ZEPLIN_PERSONAL_ACCESS_TOKEN>"
      }
    }
  }
}
```

Replace `<YOUR_ZEPLIN_PERSONAL_ACCESS_TOKEN>` with your actual personal access token. (Note that standard JSON does not allow comments, so don’t leave an inline comment next to the token.)
The project includes several npm scripts to help with development:

```shell
# Run the TypeScript compiler in watch mode for development
npm run dev

# Build the project for production
npm run build

# Run ESLint on source files
npm run lint

# Automatically fix ESLint issues where possible
npm run lint:fix

# Test the MCP server locally with the inspector tool
npm run inspect
```
To run `npm run inspect`, first create a `.env` file in the root directory:

```
ZEPLIN_ACCESS_TOKEN=<YOUR_ZEPLIN_PERSONAL_ACCESS_TOKEN>
```
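If you prefer to create the file from the command line, one quick way (the token value is a placeholder you must substitute) is:

```shell
# Write the Zeplin token into a .env file in the project root.
# Replace the placeholder with your real personal access token.
printf 'ZEPLIN_ACCESS_TOKEN=%s\n' "<YOUR_ZEPLIN_PERSONAL_ACCESS_TOKEN>" > .env
```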
This project uses ESLint to enforce code quality and consistency. The configuration is in `eslint.config.js`. Key style guidelines include:

When contributing to this project, please ensure your code follows these guidelines by running `npm run lint:fix` before submitting changes.
The quality and specificity of your prompts significantly impact the AI’s ability to generate accurate and useful code. These tips aren’t mandatory, but they will noticeably improve output quality.
When you need to implement a small update or addition to an existing screen or component based on a new Zeplin design version.
The latest design for the following screen includes a new addition: a Checkbox component has been added to the MenuItem component. Here is the short URL of the screen: <zeplin short url of the screen, e.g., https://zpl.io/abc123X>. Focus on the MenuItem component.
The Checkbox component can be found under the path/to/your/checkbox/component directory.
The relevant screen file is located at path/to/your/screen/file.tsx.
The MenuItem component, which needs to be modified, is located at path/to/your/menuitem/component.
Please implement this new addition.
Why this is effective:
For implementing larger screens or features, it’s often best to build individual components first and then assemble them.
Implement this component: <zeplin short url of the first component, e.g., https://zpl.io/def456Y>. Use Zeplin for design specifications.
(AI generates the first component...)
Implement this other component: <zeplin short url of the second component, e.g., https://zpl.io/ghi789Z>. Use Zeplin for design specifications.
(AI generates the second component...)
...
Now, using the components you just implemented (and any other existing components), implement the following screen: <zeplin short url of the screen, e.g., https://zpl.io/jkl012A>. Use Zeplin for the screen layout and any direct elements.
Why this is effective:
When dealing with complex Zeplin screens or components with many variants and layers, the amount of design data fetched can be extensive. This can exceed the context window limitations of the AI model you are using, leading to truncated information or less effective code generation. Here are several strategies to manage the amount of information sent to the model:

- **Limit screen variants (`includeVariants: false`):** When using the `get_screen` tool, the model can be instructed to fetch only the specific screen version linked in the URL, rather than all of its variants (e.g., different states, sizes, themes), by setting the `includeVariants` parameter to `false` during the tool call. Example prompt: “Get the design for https://zpl.io/abc123X. I only need the specific version linked, not all its variants.” The AI agent, when calling `get_screen`, should then ideally use `includeVariants: false`.
- **Focus on specific layers/components (`targetLayerName` or targeted prompts):** The `get_screen` tool has a `targetLayerName` parameter. If the model can identify a specific layer name from your prompt (e.g., “the ‘Submit Button’”), it can use this parameter, and the server will return data primarily for that layer and its children rather than the entire screen’s layer tree. Even without `targetLayerName` in the tool call, very specific prompts can guide the model to internally prioritize or summarize information related to the mentioned element. Example prompt: “Focus on the ‘UserProfileHeader’ layer in screen https://zpl.io/screenXYZ. I need to implement its layout and text styles.” If the AI uses `get_screen`, it could populate `targetLayerName: "UserProfileHeader"`.
- **Iterative, component-first implementation:** As in the component-first strategy described earlier, implement individual components before assembling the full screen. This keeps the data returned by each `get_component` or `get_screen` call to a manageable size.