Scrapling Fetch MCP
Helps AI assistants fetch content from bot-protected websites. Uses Scrapling (patchright + curl-cffi) to bypass anti-automation measures, returning clean HTML or Markdown.
Optimized for low-volume retrieval of documentation and reference materials. Not designed for high-volume scraping or data harvesting.
Requirements: Python 3.10+, uv
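If you're not sure your environment meets these requirements, a quick check along these lines can save a failed install (a sketch; the linked page is uv's official install guide):

```shell
# Verify Python 3.10+ and uv are available before installing.
python3 -c 'import sys; sys.exit(0 if sys.version_info >= (3, 10) else 1)' \
  && echo "Python OK" \
  || echo "Python 3.10+ required"
command -v uv >/dev/null 2>&1 \
  && echo "uv OK" \
  || echo "uv not found (see https://docs.astral.sh/uv/getting-started/installation/)"
```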
Claude Code Skill
The easiest way to use this is as a Claude Code skill. Once installed, Claude will automatically fetch bot-protected URLs when you ask — no manual commands needed.
Install into your project (recommended — only loads in this project's context):
git clone --depth=1 https://github.com/cyberchitta/scrapling-fetch-mcp /tmp/scrapling-fetch-mcp
cp -r /tmp/scrapling-fetch-mcp/skills/s-fetch .claude/skills/
cp -r /tmp/scrapling-fetch-mcp/skills/s-fetch-setup .claude/skills/
rm -rf /tmp/scrapling-fetch-mcp
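After copying, you can sanity-check that both skill directories landed where Claude Code looks for them (a sketch; run from the project root):

```shell
# Both directories should exist under .claude/skills/ in the project root.
for skill in s-fetch s-fetch-setup; do
  if [ -d ".claude/skills/$skill" ]; then
    echo "$skill: installed"
  else
    echo "$skill: missing"
  fi
done
```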
Or install for all projects (loads into context everywhere):
git clone --depth=1 https://github.com/cyberchitta/scrapling-fetch-mcp /tmp/scrapling-fetch-mcp
cp -r /tmp/scrapling-fetch-mcp/skills/s-fetch ~/.claude/skills/
cp -r /tmp/scrapling-fetch-mcp/skills/s-fetch-setup ~/.claude/skills/
rm -rf /tmp/scrapling-fetch-mcp
Then ask Claude to run /s-fetch-setup — it will install the tool and browser binaries (large download), then remove itself. After that, just ask naturally:
"Fetch the docs at https://example.com/api"
"Find all mentions of 'authentication' on that page"
"Get me the installation instructions from their homepage"
Claude Desktop (MCP Server)
If you've already run /s-fetch-setup, the tool is installed — skip to the config below.
Otherwise install first:
uv tool install git+https://github.com/cyberchitta/scrapling-fetch-mcp
uvx --from git+https://github.com/cyberchitta/scrapling-fetch-mcp scrapling install
Note: Browser installation downloads hundreds of MB and must complete before first use. If the server times out initially, wait a few minutes and try again.
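Before wiring up Claude Desktop, you can confirm the tool registered with uv by listing installed tools (a sketch; the exact output format may vary by uv version):

```shell
# List uv-managed tools and look for scrapling-fetch-mcp.
uv tool list 2>/dev/null | grep -i scrapling \
  || echo "scrapling-fetch-mcp not found -- run the install command above"
```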
Add this to your Claude Desktop MCP settings and restart:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "scrapling-fetch": {
      "command": "uvx",
      "args": ["scrapling-fetch-mcp"]
    }
  }
}
How It Works
Two tools, used automatically by Claude:
- Page fetching — retrieves complete pages with pagination support
- Pattern extraction — finds content matching a regex
Three protection levels, escalated automatically:
- basic — fast (1-2s), works for most sites
- stealth — moderate (3-8s), headless Chromium
- max-stealth — thorough (10s+), full browser fingerprint
Limitations
- Text content only (documentation, articles, references)
- Not for high-volume scraping or sites requiring authentication
- Performance varies by site complexity and protection level
License
Apache 2.0