Easy web data access. Simplified retrieval of information from websites and online sources.
This repository provides a Model Context Protocol (MCP) server that connects LLMs and applications to Decodo's platform. The server facilitates integration between MCP-compatible clients and Decodo's services, streamlining access to our tools and capabilities.
Visit the decodo-mcp-server page on Smithery, select your preferred MCP client, and generate installation instructions.
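If you prefer the command line, Smithery also provides a CLI installer. A minimal sketch, assuming the server's Smithery identifier (copy the exact command Smithery generates for your client):

```bash
# The server identifier below is an assumption; use the command generated on the Smithery page.
npx -y @smithery/cli install @Decodo/decodo-mcp-server --client claude
```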
You'll need Decodo Advanced Web Scraping API credentials, which you can get by starting a free trial on the dashboard. Once you have a plan activated, take note of your generated username and password. To install the server manually, clone the repository and build it:
```bash
git clone https://github.com/Decodo/decodo-mcp-server
cd decodo-mcp-server
npm install
npm run build
cd build/
pwd   # prints the absolute path of the build directory
```
Appending index.js to the directory printed by pwd gives your build file location, which should look something like this:

```
/Users/your.user/projects/decodo-mcp/build/index.js
```
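Optionally, you can sanity-check the build by running the server directly before wiring it into a client; it should start up and wait for MCP messages on stdin. A minimal sketch, assuming the illustrative path above and the environment variable names from the config below:

```bash
# Starts the MCP server on stdio; press Ctrl+C to stop.
SCRAPER_API_USERNAME="your_username" \
SCRAPER_API_PASSWORD="your_password" \
node /Users/your.user/projects/decodo-mcp/build/index.js
```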
Follow the guide here to find the setup file, then update claude_desktop_config.json to look like this:
```json
{
  "mcpServers": {
    "decodo-mcp": {
      "command": "node",
      "args": ["/Users/your.user/projects/decodo-mcp/build/index.js"],
      "env": {
        "SCRAPER_API_USERNAME": "your_username",
        "SCRAPER_API_PASSWORD": "your_password"
      }
    }
  }
}
```
For installation instructions, see the Cursor documentation.
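As a reference point, Cursor reads MCP servers from an mcp.json file (globally in ~/.cursor/mcp.json, or per-project in .cursor/mcp.json) that uses the same mcpServers shape as above. A minimal sketch, assuming that layout; check the Cursor documentation for the authoritative format:

```json
{
  "mcpServers": {
    "decodo-mcp": {
      "command": "node",
      "args": ["/Users/your.user/projects/decodo-mcp/build/index.js"],
      "env": {
        "SCRAPER_API_USERNAME": "your_username",
        "SCRAPER_API_PASSWORD": "your_password"
      }
    }
  }
}
```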
The server exposes the following tools:
| Tool | Description | Example prompt |
| --- | --- | --- |
| scrape_as_markdown | Scrapes any target URL given via the prompt and returns the results in Markdown. | Scrape peacock.com from a US IP address and tell me the pricing. |
| google_search_parsed | Scrapes Google Search for a given query and returns parsed results. | Scrape Google Search for shoes and tell me the top position. |
| amazon_search_parsed | Scrapes Amazon Search for a given query and returns parsed results. | Scrape Amazon Search for toothbrushes. |
The following parameters are inferred from user prompts:
| Parameter | Description |
| --- | --- |
| jsRender | Renders the target URL in a headless browser. |
| geo | Sets the country from which the request will originate. |
| locale | Sets the locale of the request. |
| tokenLimit | Truncates the response content to at most this many tokens. Useful if the context window is small. |
| fullResponse | Skips automatic truncation and returns the full content. May trigger warnings if the context window is small. |
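The tools and parameters can also be exercised programmatically. Below is a minimal sketch using the MCP TypeScript SDK; the argument names (url, geo, jsRender) mirror the tables above, but the exact input schema is an assumption, so inspect the server's tool listing for the authoritative shape:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the built server over stdio, passing credentials via env.
const transport = new StdioClientTransport({
  command: "node",
  args: ["/Users/your.user/projects/decodo-mcp/build/index.js"],
  env: {
    SCRAPER_API_USERNAME: "your_username",
    SCRAPER_API_PASSWORD: "your_password",
  },
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Call scrape_as_markdown; the argument names here are assumptions based on the tables above.
const result = await client.callTool({
  name: "scrape_as_markdown",
  arguments: { url: "https://peacock.com", geo: "US", jsRender: true },
});

console.log(result.content);
await client.close();
```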
Query your AI agent with the following prompt:
Scrape peacock.com from a German IP address and tell me the pricing.
This prompt will report that peacock.com is geo-restricted. To bypass the geo-restriction, request a US IP address:
Scrape peacock.com from a US IP address and tell me the pricing.
If your agent has a small context window, the content returned from scraping is automatically truncated to avoid context overflow. You can increase the number of tokens returned within your prompt:
Scrape hacker news, return 50k tokens.
If your agent has a large context window, tell it to return the full content:
Scrape hacker news, return full content.
All code is released under the MIT License.