# Scrapling Fetch MCP
Helps AI assistants fetch content from bot-protected websites. Uses Scrapling (patchright + curl-cffi) to bypass anti-automation measures, returning clean HTML or Markdown.
Optimized for low-volume retrieval of documentation and reference materials. Not designed for high-volume scraping or data harvesting.
Requirements: Python 3.10+, uv
## Claude Code Skill
The easiest way to use this is as a Claude Code skill. Once installed, Claude will automatically fetch bot-protected URLs when you ask — no manual commands needed.
Install into your project (recommended — only loads in this project's context):
```shell
git clone --depth=1 https://github.com/cyberchitta/scrapling-fetch-mcp /tmp/scrapling-fetch-mcp
cp -r /tmp/scrapling-fetch-mcp/skills/s-fetch .claude/skills/
cp -r /tmp/scrapling-fetch-mcp/skills/s-fetch-setup .claude/skills/
rm -rf /tmp/scrapling-fetch-mcp
```
Or install for all projects (loads into context everywhere):
```shell
git clone --depth=1 https://github.com/cyberchitta/scrapling-fetch-mcp /tmp/scrapling-fetch-mcp
cp -r /tmp/scrapling-fetch-mcp/skills/s-fetch ~/.claude/skills/
cp -r /tmp/scrapling-fetch-mcp/skills/s-fetch-setup ~/.claude/skills/
rm -rf /tmp/scrapling-fetch-mcp
```
Then ask Claude to run `/s-fetch-setup` — it will install the tool and browser binaries (large download), then remove itself. After that, just ask naturally:
- "Fetch the docs at https://example.com/api"
- "Find all mentions of 'authentication' on that page"
- "Get me the installation instructions from their homepage"
## Claude Desktop (MCP Server)
If you've already run /s-fetch-setup, the tool is installed — skip to the config below.
Otherwise install first:
```shell
uv tool install git+https://github.com/cyberchitta/scrapling-fetch-mcp
uvx --from git+https://github.com/cyberchitta/scrapling-fetch-mcp scrapling install
```
Note: Browser installation downloads hundreds of MB and must complete before first use. If the server times out initially, wait a few minutes and try again.
Add this to your Claude Desktop MCP settings and restart:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
```json
{
  "mcpServers": {
    "scrapling-fetch": {
      "command": "uvx",
      "args": ["scrapling-fetch-mcp"]
    }
  }
}
```
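If you prefer scripting the edit, here is an illustrative Python sketch (not part of the official tooling) that locates the config file for the current OS, using the paths listed above, and merges the entry without clobbering servers you already have configured:

```python
import json
import os
import platform
from pathlib import Path

SERVER_ENTRY = {"command": "uvx", "args": ["scrapling-fetch-mcp"]}

def config_path() -> Path:
    """Claude Desktop config location, per the paths listed above."""
    system = platform.system()
    if system == "Darwin":
        return Path.home() / "Library/Application Support/Claude/claude_desktop_config.json"
    if system == "Windows":
        return Path(os.environ["APPDATA"]) / "Claude" / "claude_desktop_config.json"
    raise RuntimeError("Claude Desktop runs on macOS and Windows only")

def add_server(config: dict) -> dict:
    """Merge the scrapling-fetch entry, keeping any servers already present."""
    config.setdefault("mcpServers", {})["scrapling-fetch"] = dict(SERVER_ENTRY)
    return config

def install() -> None:
    """Read the config (if any), merge the entry, and write it back."""
    path = config_path()
    config = json.loads(path.read_text()) if path.exists() else {}
    path.write_text(json.dumps(add_server(config), indent=2))
```

Restart Claude Desktop after writing the file so the new server is picked up.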
## How It Works
Two tools, used automatically by Claude:
- Page fetching — retrieves complete pages with pagination support
- Pattern extraction — finds content matching a regex
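To give a feel for what pattern extraction does, here is an illustrative sketch (not the server's actual code) that finds regex matches in fetched text and returns each with a little surrounding context:

```python
import re

def extract_matches(text: str, pattern: str, context: int = 30) -> list[str]:
    """Return each regex match with up to `context` characters on either side."""
    snippets = []
    for m in re.finditer(pattern, text):
        start = max(0, m.start() - context)
        end = min(len(text), m.end() + context)
        snippets.append(text[start:end])
    return snippets
```

Extracting only the matching snippets keeps responses small, which matters when the target page is large and only a few passages are relevant.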
Three protection levels, escalated automatically:
- `basic` — fast (1-2s), works for most sites
- `stealth` — moderate (3-8s), headless Chromium
- `max-stealth` — thorough (10s+), full browser fingerprint
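The automatic escalation can be sketched as a simple ladder that tries the cheapest level first. This is illustrative pseudologic, not the server's code; `fetch_at_level` is a hypothetical stand-in for a real fetch attempt:

```python
LEVELS = ["basic", "stealth", "max-stealth"]  # cheapest first, as listed above

def fetch_with_escalation(url: str, fetch_at_level) -> tuple[str, str]:
    """Try each protection level in order; return (level, content) on first success."""
    last_error = None
    for level in LEVELS:
        try:
            return level, fetch_at_level(url, level)
        except Exception as exc:  # e.g. blocked, challenge page, timeout
            last_error = exc
    raise RuntimeError(f"all protection levels failed for {url}") from last_error
```

Ordering the levels by cost is why most fetches stay in the 1-2s range: the heavier browser modes only run when a cheaper attempt is rejected.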
## Limitations
- Text content only (documentation, articles, references)
- Not for high-volume scraping or sites requiring authentication
- Performance varies by site complexity and protection level
## License
Apache 2.0