scrape-do-mcp
MCP Server for Scrape.do - Web Scraping & Google Search with anti-bot bypass
An MCP server that wraps Scrape.do's documented APIs in one package: the main scraping API, Google Search API, Amazon Scraper API, Async API, and a Proxy Mode configuration helper.
Official docs: https://scrape.do/documentation/
Coverage
- `scrape_url`: Main Scrape.do API with JS rendering, geo-targeting, session persistence, screenshots, ReturnJSON, browser interactions, cookies, and header forwarding.
- `google_search`: Structured Google SERP API with `google_domain`, `location`, `uule`, `lr`, `cr`, `safe`, `nfpr`, `filter`, pagination, and optional raw HTML.
- `amazon_product`: Amazon PDP endpoint.
- `amazon_offer_listing`: Amazon offer listing endpoint.
- `amazon_search`: Amazon search/category endpoint.
- `amazon_raw_html`: Raw HTML Amazon endpoint with geo-targeting.
- `async_create_job`, `async_get_job`, `async_get_task`, `async_list_jobs`, `async_cancel_job`, `async_get_account`: Async API coverage with both MCP-friendly aliases and official field names.
- `proxy_mode_config`: Builds official Proxy Mode connection details, default parameter strings, and CA certificate references.
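To show how a `scrape_url` call maps onto the underlying service, here is a minimal sketch of composing a request against Scrape.do's documented GET API. The endpoint and the `token`, `url`, `render`, and `geoCode` parameters follow the official docs linked above; the token value is a placeholder.

```python
from urllib.parse import urlencode

def build_scrape_request(token, target_url, **params):
    """Compose the query string for a Scrape.do API request.

    The target URL is percent-encoded by urlencode so it survives
    as a single query parameter.
    """
    query = {"token": token, "url": target_url, **params}
    return "https://api.scrape.do/?" + urlencode(query)

build_scrape_request("YOUR_TOKEN_HERE", "https://example.com",
                     render="true", geoCode="us")
```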
Compatibility Notes
- `scrape_url` supports both MCP-friendly aliases and official parameter names: `render_js` or `render`, `super_proxy` or `super`, `screenshot` or `screenShot`.
- `google_search` supports: `query` or `q`, `country` or `gl`, `language` or `hl`, `domain` or `google_domain`, `includeHtml` or `include_html`.
- `async_create_job` accepts both alias fields like `targets`, `render`, `webhookUrl` and official Async API fields like `Targets`, `Render`, `WebhookURL`.
- `async_get_job`, `async_get_task`, and `async_cancel_job` accept both `jobId`/`taskId` and official `jobID`/`taskID`.
- `async_list_jobs` accepts both `pageSize` and official `page_size`.
- For header forwarding in `scrape_url`, pass `headers` plus `header_mode` (`custom`, `extra`, or `forward`).
- Screenshot responses preserve the official Scrape.do JSON body and also attach MCP image content when screenshots are present.
- `scrape_url` now defaults to `output="raw"` to match the official API more closely.
- `scrape_url` includes response metadata in `structuredContent`, which helps surface `pureCookies`, `transparentResponse`, and binary responses inside MCP.
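The alias handling above can be sketched as a simple key-renaming pass before the request is built. The mapping below covers only the aliases listed in this README (mixing `scrape_url` and `google_search` parameters for illustration); the actual server may normalize more fields.

```python
# MCP-friendly alias -> official parameter name, per the notes above.
ALIASES = {
    "render_js": "render",
    "super_proxy": "super",
    "screenshot": "screenShot",
    "query": "q",
    "country": "gl",
    "language": "hl",
    "domain": "google_domain",
    "includeHtml": "include_html",
}

def normalize(params):
    """Rename alias keys to official names; official names pass through."""
    return {ALIASES.get(key, key): value for key, value in params.items()}

normalize({"render_js": True, "super_proxy": False, "wait": 2000})
```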
Installation
Quick Install
claude mcp add-json scrape-do --scope user '{
"type": "stdio",
"command": "npx",
"args": ["-y", "scrape-do-mcp"],
"env": {
"SCRAPE_DO_TOKEN": "YOUR_TOKEN_HERE"
}
}'
Claude Desktop
Add this to your Claude Desktop config file (claude_desktop_config.json):
{
"mcpServers": {
"scrape-do": {
"command": "npx",
"args": ["-y", "scrape-do-mcp"],
"env": {
"SCRAPE_DO_TOKEN": "YOUR_TOKEN_HERE"
}
}
}
}
Get your token at https://app.scrape.do
Available Tools
| Tool | Purpose |
|---|---|
| scrape_url | Main Scrape.do scraping API wrapper |
| google_search | Structured Google search results |
| amazon_product | Amazon PDP structured data |
| amazon_offer_listing | Amazon seller offers |
| amazon_search | Amazon keyword/category results |
| amazon_raw_html | Raw Amazon HTML with geo-targeting |
| async_create_job | Create Async API jobs |
| async_get_job | Fetch Async job details |
| async_get_task | Fetch Async task details |
| async_list_jobs | List Async jobs |
| async_cancel_job | Cancel Async jobs |
| async_get_account | Fetch Async account/concurrency info |
| proxy_mode_config | Generate Proxy Mode configuration |
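As a rough idea of what `proxy_mode_config` assembles, here is a sketch of Scrape.do's documented Proxy Mode connection string, where the API token is the proxy username and default parameters travel in the password field. The host and port match the official docs at the time of writing; verify them there before relying on this.

```python
def proxy_url(token, **params):
    """Build a Proxy Mode connection URL: token as username,
    request parameters joined with '&' as the password."""
    password = "&".join(f"{key}={value}" for key, value in params.items())
    return f"http://{token}:{password}@proxy.scrape.do:8080"

proxy_url("YOUR_TOKEN_HERE", render="true", geoCode="us")
```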
Example Prompts
Scrape https://example.com with render=true and wait for #app.
Search Google for "open source MCP servers" with google_domain=google.co.uk and lr=lang_en.
Get the Amazon PDP for ASIN B0C7BKZ883 in the US with zipcode 10001.
Create an async job for these 20 URLs and give me the job ID.
Development
npm install
npm run build
npm run dev
License
MIT