Scrapling Fetch MCP
Helps AI assistants fetch content from bot-protected websites. Uses Scrapling (patchright + curl-cffi) to bypass anti-automation measures, returning clean HTML or Markdown.
Optimized for low-volume retrieval of documentation and reference materials. Not designed for high-volume scraping or data harvesting.
Requirements: Python 3.10+, uv
Claude Code Skill
The easiest way to use this is as a Claude Code skill. Once installed, Claude will automatically fetch bot-protected URLs when you ask — no manual commands needed.
Install into your project (recommended — only loads in this project's context):
git clone --depth=1 https://github.com/cyberchitta/scrapling-fetch-mcp /tmp/scrapling-fetch-mcp
cp -r /tmp/scrapling-fetch-mcp/skills/s-fetch .claude/skills/
cp -r /tmp/scrapling-fetch-mcp/skills/s-fetch-setup .claude/skills/
rm -rf /tmp/scrapling-fetch-mcp
Or install for all projects (loads into context everywhere):
git clone --depth=1 https://github.com/cyberchitta/scrapling-fetch-mcp /tmp/scrapling-fetch-mcp
cp -r /tmp/scrapling-fetch-mcp/skills/s-fetch ~/.claude/skills/
cp -r /tmp/scrapling-fetch-mcp/skills/s-fetch-setup ~/.claude/skills/
rm -rf /tmp/scrapling-fetch-mcp
Then ask Claude to run /s-fetch-setup — it will install the tool and browser binaries (large download), then remove itself. After that, just ask naturally:
"Fetch the docs at https://example.com/api"
"Find all mentions of 'authentication' on that page"
"Get me the installation instructions from their homepage"
Claude Desktop (MCP Server)
If you've already run /s-fetch-setup, the tool is installed — skip to the config below.
Otherwise install first:
uv tool install git+https://github.com/cyberchitta/scrapling-fetch-mcp
uvx --from git+https://github.com/cyberchitta/scrapling-fetch-mcp scrapling install
Note: Browser installation downloads hundreds of MB and must complete before first use. If the server times out initially, wait a few minutes and try again.
Add this to your Claude Desktop MCP settings and restart:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "scrapling-fetch": {
      "command": "uvx",
      "args": ["scrapling-fetch-mcp"]
    }
  }
}
How It Works
Two tools, used automatically by Claude:
- Page fetching — retrieves complete pages with pagination support
- Pattern extraction — finds content matching a regex
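Conceptually, pattern extraction works like a context-aware grep over the fetched page: each regex match is returned together with some surrounding text so the model can make sense of it. A minimal sketch of that idea (the function name and signature here are illustrative, not the server's actual API):

```python
import re

def extract_pattern(text: str, pattern: str, context_chars: int = 80) -> list[str]:
    """Return each regex match with surrounding context,
    mimicking what the pattern-extraction tool does conceptually."""
    snippets = []
    for m in re.finditer(pattern, text):
        start = max(0, m.start() - context_chars)
        end = min(len(text), m.end() + context_chars)
        snippets.append(text[start:end])
    return snippets

page = "To log in, send a POST to /auth/token with your API key. Authentication errors return 401."
print(extract_pattern(page, r"[Aa]uthentication", context_chars=20))
```

Returning context windows rather than bare matches keeps responses small while still giving the assistant enough surrounding text to answer questions like "find all mentions of 'authentication'".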
Three protection levels, escalated automatically:
- basic — fast (1-2s), works for most sites
- stealth — moderate (3-8s), headless Chromium
- max-stealth — thorough (10s+), full browser fingerprint
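The escalation strategy amounts to trying the cheapest level first and stepping up only on failure. A sketch of that control flow, where `fetch_with_level` is a hypothetical stand-in for the real fetcher:

```python
# Protection levels ordered from cheapest to most thorough.
LEVELS = ["basic", "stealth", "max-stealth"]

def fetch_escalating(url: str, fetch_with_level) -> tuple[str, str]:
    """Try each protection level in order; return (level, content)
    for the first one that succeeds."""
    last_error = None
    for level in LEVELS:
        try:
            return level, fetch_with_level(url, level)
        except Exception as exc:  # blocked, timed out, etc.
            last_error = exc
    raise RuntimeError(f"all protection levels failed for {url}") from last_error

# Demo with a fake fetcher that only succeeds at the "stealth" level.
def fake_fetch(url, level):
    if level != "stealth":
        raise RuntimeError("blocked")
    return "<html>ok</html>"

level, html = fetch_escalating("https://example.com", fake_fetch)
```

This is why the timing ranges above matter: a site that yields at `basic` costs 1-2s, while one that forces `max-stealth` can take 10s or more before content comes back.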
Limitations
- Text content only (documentation, articles, references)
- Not for high-volume scraping or sites requiring authentication
- Performance varies by site complexity and protection level
License
Apache 2.0