# Scrapling Fetch MCP

Fetches HTML and Markdown from websites with anti-automation measures, using Scrapling.

Helps AI assistants fetch content from bot-protected websites. Uses Scrapling (patchright + curl-cffi) to bypass anti-automation measures, returning clean HTML or Markdown.

Optimized for low-volume retrieval of documentation and reference materials. Not designed for high-volume scraping or data harvesting.

Requirements: Python 3.10+, uv
## Claude Code Skill
The easiest way to use this is as a Claude Code skill. Once installed, Claude will automatically fetch bot-protected URLs when you ask — no manual commands needed.
Install into your project (recommended — only loads in this project's context):
```sh
git clone --depth=1 https://github.com/cyberchitta/scrapling-fetch-mcp /tmp/scrapling-fetch-mcp
cp -r /tmp/scrapling-fetch-mcp/skills/s-fetch .claude/skills/
cp -r /tmp/scrapling-fetch-mcp/skills/s-fetch-setup .claude/skills/
rm -rf /tmp/scrapling-fetch-mcp
```
Or install for all projects (loads into context everywhere):
```sh
git clone --depth=1 https://github.com/cyberchitta/scrapling-fetch-mcp /tmp/scrapling-fetch-mcp
cp -r /tmp/scrapling-fetch-mcp/skills/s-fetch ~/.claude/skills/
cp -r /tmp/scrapling-fetch-mcp/skills/s-fetch-setup ~/.claude/skills/
rm -rf /tmp/scrapling-fetch-mcp
```
Then ask Claude to run `/s-fetch-setup` — it will install the tool and browser binaries (large download), then remove itself. After that, just ask naturally:
"Fetch the docs at https://example.com/api"
"Find all mentions of 'authentication' on that page"
"Get me the installation instructions from their homepage"
## Claude Desktop (MCP Server)

If you've already run `/s-fetch-setup`, the tool is installed — skip to the config below.
Otherwise install first:
```sh
uv tool install git+https://github.com/cyberchitta/scrapling-fetch-mcp
uvx --from git+https://github.com/cyberchitta/scrapling-fetch-mcp scrapling install
```
Note: Browser installation downloads hundreds of MB and must complete before first use. If the server times out initially, wait a few minutes and try again.
Add this to your Claude Desktop MCP settings and restart:

- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
```json
{
  "mcpServers": {
    "scrapling-fetch": {
      "command": "uvx",
      "args": ["scrapling-fetch-mcp"]
    }
  }
}
```
## How It Works
Two tools, used automatically by Claude:
- Page fetching — retrieves complete pages with pagination support
- Pattern extraction — finds content matching a regex
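Pattern extraction behaves like running a regular expression over the fetched page text. A minimal local illustration using plain Python `re` — the tool's actual parameter names are not shown here, and this page text is made up for the example:

```python
import re

# Text as it might come back from a page fetch (illustrative only).
page_text = """
## Authentication
Pass your API key in the Authorization header.
See the authentication guide for token rotation.
"""

# Find every line mentioning "authentication", case-insensitively --
# the same kind of query as "find all mentions of 'authentication'".
matches = re.findall(r"^.*authentication.*$", page_text,
                     re.IGNORECASE | re.MULTILINE)
print(matches)  # two matching lines
```

In practice Claude builds the regex for you from a natural-language request, so you never write the pattern yourself.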
Three protection levels, escalated automatically:
- `basic` — fast (1-2s), works for most sites
- `stealth` — moderate (3-8s), headless Chromium
- `max-stealth` — thorough (10s+), full browser fingerprint
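The automatic escalation is essentially a retry loop that moves to a heavier protection level when a fetch comes back blocked. A sketch of that idea — the function and names here are illustrative, not the server's real internals:

```python
# Hypothetical sketch of protection-level escalation -- not the
# server's actual code, just the shape of the behavior.
LEVELS = ["basic", "stealth", "max-stealth"]

def fetch_with_escalation(url, fetch):
    """Try each level in order; return (level, content) on first success.

    `fetch` is a callable (url, level) -> content, or None if blocked.
    """
    for level in LEVELS:
        content = fetch(url, level)
        if content is not None:
            return level, content
    raise RuntimeError(f"all protection levels blocked for {url}")

# Simulate a site that blocks plain HTTP but allows a headless browser.
def fake_fetch(url, level):
    return "<html>docs</html>" if level != "basic" else None

result = fetch_with_escalation("https://example.com/api", fake_fetch)
print(result)  # -> ('stealth', '<html>docs</html>')
```

This is why simple pages come back in a second or two while heavily protected ones take longer: the slower levels only run when the faster ones fail.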
## Limitations
- Text content only (documentation, articles, references)
- Not for high-volume scraping or sites requiring authentication
- Performance varies by site complexity and protection level
## License
Apache 2.0