Skyvern
MCP Server to let Claude / your AI control the browser
Model Context Protocol (MCP)
Skyvern's MCP server implementation helps connect your AI Applications to the browser. This allows your AI applications to do things like: Fill out forms, download files, research information on the web, and more.
You can connect your MCP-enabled applications to Skyvern in two ways:
- Local Skyvern Server
  - Use your favourite LLM to power Skyvern
- Skyvern Cloud
  - Create an account at app.skyvern.com
  - Get the API key from the settings page; you will use it during setup
Quickstart
⚠️ REQUIREMENT: Skyvern currently requires a Python 3.11 environment ⚠️
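The version requirement above can be checked programmatically before installing. A minimal sketch (the `is_supported` helper is illustrative, not part of Skyvern):

```python
import sys

# Skyvern currently targets Python 3.11, so fail fast on other versions.
def is_supported(version=None):
    """Return True when the interpreter matches the required 3.11 series."""
    major, minor = (version or sys.version_info)[:2]
    return (major, minor) == (3, 11)

if __name__ == "__main__":
    if not is_supported():
        sys.exit("Skyvern requires Python 3.11; found %d.%d" % sys.version_info[:2])
```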
1. Install Skyvern:

   ```bash
   pip install skyvern
   ```

2. Configure Skyvern. Run the setup wizard, which will guide you through the configuration process. You can connect to either Skyvern Cloud or a local version of Skyvern:

   ```bash
   skyvern init
   ```

3. (Optional) Launch the Skyvern server; this is only required in local mode:

   ```bash
   skyvern run server
   ```
Examples
Skyvern lets Claude look up today's top Hacker News posts
https://github.com/user-attachments/assets/0c10dd96-c6ff-4b99-ad99-f34a5afd04fe
Cursor looking up the top programming jobs in your area
https://github.com/user-attachments/assets/084c89c9-6229-4bac-adc9-6ad69b41327d
Ask Windsurf to do a form 5500 search and download some files
https://github.com/user-attachments/assets/70cfe310-24dc-431a-adde-e72691f198a7
Supported Applications
`skyvern init` helps configure the following applications for you:
- Cursor
- Windsurf
- Claude Desktop
- Your custom MCP App?
Use the following config to set up Skyvern for any other MCP-enabled application. Set `SKYVERN_BASE_URL` to `http://localhost:8000` if running locally. You can find the local `SKYVERN_API_KEY` in the `.env` file after running `skyvern init`, or in your Skyvern Cloud console. (Comments are not valid JSON, so these notes live outside the snippet.)

```json
{
  "mcpServers": {
    "Skyvern": {
      "env": {
        "SKYVERN_BASE_URL": "https://api.skyvern.com",
        "SKYVERN_API_KEY": "YOUR_SKYVERN_API_KEY"
      },
      "command": "PATH_TO_PYTHON",
      "args": ["-m", "skyvern", "run", "mcp"]
    }
  }
}
```
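If you script the setup for several MCP clients, the same config can be generated programmatically. A minimal sketch; `skyvern_mcp_config` is a hypothetical helper, and defaulting `command` to `sys.executable` is an assumption standing in for `PATH_TO_PYTHON`:

```python
import json
import sys

def skyvern_mcp_config(api_key, base_url="https://api.skyvern.com", python=sys.executable):
    """Build an mcpServers entry matching the JSON snippet above."""
    return {
        "mcpServers": {
            "Skyvern": {
                "env": {
                    "SKYVERN_BASE_URL": base_url,
                    "SKYVERN_API_KEY": api_key,
                },
                "command": python,
                "args": ["-m", "skyvern", "run", "mcp"],
            }
        }
    }

if __name__ == "__main__":
    # Emit the config so it can be pasted into a client's MCP settings file.
    print(json.dumps(skyvern_mcp_config("YOUR_SKYVERN_API_KEY"), indent=2))
```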
Glama Release Setup
Glama's "release" flow is different from publishing the package to PyPI or the official MCP Registry. For Glama, you need a runnable server container so Glama can boot the MCP server, inspect the tool schema, and publish an installable release in their directory.
Use the dedicated Dockerfile in this directory for that flow. The root Dockerfile is for the full Skyvern app stack and starts `python -m skyvern.forge`, which is the wrong runtime for an MCP-only Glama release.
Recommended Glama setup:
- Claim the server in Glama. This repository already includes `glama.json`, so authorized maintainers can claim the `Skyvern-AI/skyvern` entry.
- In Glama's Dockerfile admin page, point the build to `Dockerfile.glama`.
- Keep the default command unless Glama explicitly asks for HTTP transport. The image defaults to `python -m skyvern run mcp` over stdio.
- If you want the hosted Glama release to use Skyvern Cloud browser sessions, add a real `SKYVERN_API_KEY` secret in Glama. Otherwise the container boots in local embedded mode, which is enough for inspection but not ideal for cloud-backed browser sessions.
- Deploy, wait for inspection to pass, then use Glama's "Make Release" action in the server admin UI.
If you are also publishing to the official MCP Registry, treat that as a separate step. The official registry uses package metadata and `server.json`; Glama releases are container-based.
Related Servers
Bright Data
(Sponsor) Discover, extract, and interact with the web - one interface powering automated access across the public internet.
Hacker News
Fetches and parses stories from Hacker News, providing structured data for top, new, ask, show, and job posts.
Puppeteer
Provides browser automation using Puppeteer, enabling interaction with web pages, taking screenshots, and executing JavaScript.
MCP Go Colly Crawler
A web crawling framework that integrates the Model Context Protocol (MCP) with the Colly web scraping library.
Scrapezy
Turn websites into datasets with Scrapezy
comet-mcp
Connect Claude Code to Perplexity Comet browser for agentic web browsing, deep research, and real-time task monitoring
Xiaohongshu Search & Comment
An automated tool to search notes, retrieve content, and post comments on Xiaohongshu (RedBook) using Playwright.
Intelligent Crawl4AI Agent
An AI-powered web scraping system for high-volume automation and advanced data extraction strategies.
Anysite
Turn any website into an API
Read Website Fast
Fast, token-efficient web content extraction that converts websites to clean Markdown. Features Mozilla Readability, smart caching, polite crawling with robots.txt support, and concurrent fetching with minimal dependencies.
Playwright Server
A server for browser automation using the Playwright library.