webclaw

Web content extraction for LLM pipelines — clean markdown or structured JSON from any URL using browser-grade TLS fingerprinting, no headless browser required. CLI, REST API, and MCP server.

The fastest web scraper for AI agents.
67% fewer tokens. Sub-millisecond extraction. Zero browser overhead.



Claude Code's built-in web_fetch → 403 Forbidden. webclaw → clean markdown.


Your AI agent calls fetch() and gets a 403. Or 142KB of raw HTML that burns through your token budget. webclaw fixes both.

It extracts clean, structured content from any URL using Chrome-level TLS fingerprinting — no headless browser, no Selenium, no Puppeteer. Output is optimized for LLMs: 67% fewer tokens than raw HTML, with metadata, links, and images preserved.

                     Raw HTML                          webclaw
┌──────────────────────────────────┐    ┌──────────────────────────────────┐
│ <div class="ad-wrapper">         │    │ # Breaking: AI Breakthrough      │
│ <nav class="global-nav">         │    │                                  │
│ <script>window.__NEXT_DATA__     │    │ Researchers achieved 94%         │
│ ={...8KB of JSON...}</script>    │    │ accuracy on cross-domain         │
│ <div class="social-share">       │    │ reasoning benchmarks.            │
│ <button>Tweet</button>           │    │                                  │
│ <footer class="site-footer">     │    │ ## Key Findings                  │
│ <!-- 142,847 characters -->      │    │ - 3x faster inference            │
│                                  │    │ - Open-source weights            │
│         4,820 tokens             │    │         1,590 tokens             │
└──────────────────────────────────┘    └──────────────────────────────────┘

Get Started (30 seconds)

For AI agents (Claude, Cursor, Windsurf, VS Code)

npx create-webclaw

Auto-detects your AI tools, downloads the MCP server, and configures everything. One command.

Homebrew (macOS/Linux)

brew tap 0xMassi/webclaw
brew install webclaw

Prebuilt binaries

Download from GitHub Releases for macOS (arm64, x86_64) and Linux (x86_64, aarch64).

Cargo (from source)

cargo install --git https://github.com/0xMassi/webclaw.git webclaw-cli
cargo install --git https://github.com/0xMassi/webclaw.git webclaw-mcp

Docker

docker run --rm ghcr.io/0xmassi/webclaw https://example.com

Docker Compose (with Ollama for LLM features)

cp env.example .env
docker compose up -d

Why webclaw?

                      webclaw   Firecrawl   Trafilatura   Readability
Extraction accuracy   95.1%     —           80.6%         83.5%
Token efficiency      -67%      —           -55%          -51%
Speed (100KB page)    3.2ms     ~500ms      18.4ms        8.7ms
TLS fingerprinting    Yes       No          No            No
Self-hosted           Yes       No          Yes           Yes
MCP (Claude/Cursor)   Yes       No          No            No
No browser required   Yes       No          Yes           Yes
Cost                  Free      $$$$        Free          Free

Choose webclaw if you want fast local extraction, LLM-optimized output, and native AI agent integration.


What it looks like

$ webclaw https://stripe.com -f llm

> URL: https://stripe.com
> Title: Stripe | Financial Infrastructure for the Internet
> Language: en
> Word count: 847

# Stripe | Financial Infrastructure for the Internet

Stripe is a suite of APIs powering online payment processing
and commerce solutions for internet businesses of all sizes.

## Products
- Payments — Accept payments online and in person
- Billing — Manage subscriptions and invoicing
- Connect — Build a marketplace or platform
...

$ webclaw https://github.com --brand

{
  "name": "GitHub",
  "colors": [{"hex": "#59636E", "usage": "Primary"}, ...],
  "fonts": ["Mona Sans", "ui-monospace"],
  "logos": [{"url": "https://github.githubassets.com/...", "kind": "svg"}]
}

$ webclaw https://docs.rust-lang.org --crawl --depth 2 --max-pages 50

Crawling... 50/50 pages extracted
---
# Page 1: https://docs.rust-lang.org/
...
# Page 2: https://docs.rust-lang.org/book/
...

MCP Server — 10 tools for AI agents


webclaw ships as an MCP server that plugs into Claude Desktop, Claude Code, Cursor, Windsurf, OpenCode, Antigravity, Codex CLI, and any MCP-compatible client.

npx create-webclaw    # auto-detects and configures everything

Or manual setup — add to your Claude Desktop config:

{
  "mcpServers": {
    "webclaw": {
      "command": "~/.webclaw/webclaw-mcp"
    }
  }
}

Then in Claude: "Scrape the top 5 results for 'web scraping tools' and compare their pricing" — it just works.

Available tools

Tool        Description                         Requires API key?
scrape      Extract content from any URL        No
crawl       Recursive site crawl                No
map         Discover URLs from sitemaps         No
batch       Parallel multi-URL extraction       No
extract     LLM-powered structured extraction   No (needs Ollama)
summarize   Page summarization                  No (needs Ollama)
diff        Content change detection            No
brand       Brand identity extraction           No
search      Web search + scrape results         Yes
research    Deep multi-source research          Yes

8 of 10 tools work locally — no account, no API key, fully private.


Features

Extraction

  • Readability scoring — multi-signal content detection (text density, semantic tags, link ratio)
  • Noise filtering — strips nav, footer, ads, modals, cookie banners (Tailwind-safe)
  • Data island extraction — catches React/Next.js JSON payloads, JSON-LD, hydration data
  • YouTube metadata — structured data from any YouTube video
  • PDF extraction — auto-detected via Content-Type
  • 5 output formats — markdown, text, JSON, LLM-optimized, HTML
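webclaw's exact scoring is internal to webclaw-core, but one of the listed signals — link ratio — is easy to illustrate. The sketch below is a hypothetical stand-in, not webclaw's implementation: blocks whose text is mostly anchor text (navigation, footers) score low, while dense prose scores high.

```python
# Illustrative sketch of the link-ratio signal, NOT webclaw's actual
# scoring: the fraction of a block's text that is link anchor text.
def link_ratio_score(block_text: str, anchor_text: str) -> float:
    """Return 1.0 for pure prose, 0.0 for pure link text."""
    total = len(block_text)
    if total == 0:
        return 0.0
    linked = min(len(anchor_text), total)
    return 1.0 - linked / total

# A nav bar is almost all anchor text -> score near 0
nav = link_ratio_score("Home About Contact", "Home About Contact")
# An article paragraph with one inline link -> score near 1
para = link_ratio_score("Researchers achieved 94% accuracy on benchmarks.", "benchmarks")
```

Combining several such signals (text density, semantic tags, link ratio) is what lets an extractor keep article bodies while dropping chrome.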

Content control

webclaw URL --include "article, .content"       # CSS selector include
webclaw URL --exclude "nav, footer, .sidebar"    # CSS selector exclude
webclaw URL --only-main-content                  # Auto-detect main content

Crawling

webclaw URL --crawl --depth 3 --max-pages 100   # BFS same-origin crawl
webclaw URL --crawl --sitemap                    # Seed from sitemap
webclaw URL --map                                # Discover URLs only
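The `--crawl --depth --max-pages` flags above describe a bounded breadth-first, same-origin traversal. Here is a minimal sketch of that logic with an in-memory stand-in for fetching and link extraction (the real crawler's internals may differ):

```python
# Sketch of a BFS same-origin crawl with depth and page limits.
# fetch_links is a stand-in for real fetching + link extraction.
from collections import deque
from urllib.parse import urlparse

def crawl(start: str, fetch_links, depth: int, max_pages: int) -> list[str]:
    origin = urlparse(start).netloc
    seen, order = {start}, []
    queue = deque([(start, 0)])
    while queue and len(order) < max_pages:
        url, d = queue.popleft()
        order.append(url)
        if d == depth:          # depth limit reached: visit but don't expand
            continue
        for link in fetch_links(url):
            # same-origin check: skip external hosts
            if link not in seen and urlparse(link).netloc == origin:
                seen.add(link)
                queue.append((link, d + 1))
    return order

# Tiny in-memory "site" used as the stand-in fetcher
site = {
    "https://a.test/": ["https://a.test/x", "https://b.test/ext"],
    "https://a.test/x": ["https://a.test/y"],
    "https://a.test/y": [],
}
pages = crawl("https://a.test/", lambda u: site.get(u, []), depth=2, max_pages=50)
# pages -> the three a.test URLs; b.test/ext is skipped as off-origin
```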

LLM features (Ollama / OpenAI / Anthropic)

webclaw URL --summarize                          # Page summary
webclaw URL --extract-prompt "Get all prices"    # Natural language extraction
webclaw URL --extract-json '{"type":"object"}'   # Schema-enforced extraction

Change tracking

webclaw URL -f json > snap.json                  # Take snapshot
webclaw URL --diff-with snap.json                # Compare later
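Conceptually, `--diff-with` compares a fresh extraction against a saved snapshot. The sketch below shows the idea with stdlib difflib; the field names are illustrative, not the actual `-f json` schema:

```python
# Sketch of snapshot-based change detection: diff the extracted text
# of two JSON snapshots line by line. Snapshot fields are assumed.
import difflib, json

old = json.loads('{"title": "Pricing", "content": "Pro plan: $10/mo\\nTeam plan: $25/mo"}')
new = json.loads('{"title": "Pricing", "content": "Pro plan: $12/mo\\nTeam plan: $25/mo"}')

diff = [
    line for line in difflib.unified_diff(
        old["content"].splitlines(), new["content"].splitlines(), lineterm=""
    )
    if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))
]
# diff -> ['-Pro plan: $10/mo', '+Pro plan: $12/mo']
```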

Brand extraction

webclaw URL --brand                              # Colors, fonts, logos, OG image
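One brand signal — color values — can be pulled from a page's CSS with a regex. This is only a sketch of the idea; the real `--brand` output above also reports usage ("Primary") and logos, which this toy version does not attempt:

```python
# Sketch of collecting distinct hex colors from CSS text.
import re

css = "body{color:#59636E}.btn{background:#0969da;border:1px solid #59636E}"
# dict.fromkeys dedupes while preserving first-seen order
colors = list(dict.fromkeys(re.findall(r"#[0-9a-fA-F]{6}\b", css)))
# colors -> ['#59636E', '#0969da']
```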

Proxy rotation

webclaw URL --proxy http://user:pass@host:port   # Single proxy
webclaw URLs --proxy-file proxies.txt            # Pool rotation
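The docs don't specify the rotation strategy, but a simple round-robin pool — each request takes the next proxy in the file, wrapping around — can be sketched like this:

```python
# Hypothetical round-robin proxy pool; webclaw's actual rotation
# strategy may differ.
from itertools import cycle

class ProxyPool:
    def __init__(self, proxies: list[str]):
        self._it = cycle(proxies)

    def next(self) -> str:
        return next(self._it)

pool = ProxyPool([
    "http://user:pass@p1:8080",
    "http://user:pass@p2:8080",
])
picks = [pool.next() for _ in range(3)]
# picks -> p1, p2, then p1 again
```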

Benchmarks

All numbers are from real tests on 50 diverse pages. See benchmarks/ for methodology and reproduction instructions.

Extraction quality

Accuracy      webclaw     ███████████████████ 95.1%
              readability ████████████████▋   83.5%
              trafilatura ████████████████    80.6%
              newspaper3k █████████████▎      66.4%

Noise removal webclaw     ███████████████████ 96.1%
              readability █████████████████▊  89.4%
              trafilatura ██████████████████▏ 91.2%
              newspaper3k ███████████████▎    76.8%

Speed (pure extraction, no network)

10KB page     webclaw     ██                   0.8ms
              readability █████                2.1ms
              trafilatura ██████████           4.3ms

100KB page    webclaw     ██                   3.2ms
              readability █████                8.7ms
              trafilatura ██████████           18.4ms

Token efficiency (feeding to Claude/GPT)

Format        Tokens   vs Raw HTML
Raw HTML      4,820    baseline
readability   2,340    -51%
trafilatura   2,180    -55%
webclaw llm   1,590    -67%
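The percentage column follows directly from the token counts (savings = 1 − tokens / raw tokens):

```python
# Recomputing the "vs Raw HTML" column from the token counts above.
raw = 4820
savings = {
    name: round(1 - tokens / raw, 2)
    for name, tokens in {"readability": 2340, "trafilatura": 2180, "webclaw llm": 1590}.items()
}
# savings -> {'readability': 0.51, 'trafilatura': 0.55, 'webclaw llm': 0.67}
```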

Crawl speed

Concurrency   webclaw     Crawl4AI   Scrapy
5             9.8 pg/s    5.2 pg/s   7.1 pg/s
10            18.4 pg/s   8.7 pg/s   12.3 pg/s
20            32.1 pg/s   14.2 pg/s  21.8 pg/s

Architecture

webclaw/
  crates/
    webclaw-core     Pure extraction engine. Zero network deps. WASM-safe.
    webclaw-fetch    HTTP client + TLS fingerprinting (wreq/BoringSSL). Crawler. Batch ops.
    webclaw-llm      LLM provider chain (Ollama -> OpenAI -> Anthropic)
    webclaw-pdf      PDF text extraction
    webclaw-mcp      MCP server (10 tools for AI agents)
    webclaw-cli      CLI binary

webclaw-core takes raw HTML as a &str and returns structured output. No I/O, no network, no allocator tricks. Can compile to WASM.


Configuration

Variable             Description
WEBCLAW_API_KEY      Cloud API key (enables bot bypass, JS rendering, search, research)
OLLAMA_HOST          Ollama URL for local LLM features (default: http://localhost:11434)
OPENAI_API_KEY       OpenAI API key for LLM features
ANTHROPIC_API_KEY    Anthropic API key for LLM features
WEBCLAW_PROXY        Single proxy URL
WEBCLAW_PROXY_FILE   Path to proxy pool file

Cloud API (optional)

For bot-protected sites, JS rendering, and advanced features, webclaw offers a hosted API at webclaw.io.

The CLI and MCP server work locally first. Cloud is used as a fallback when:

  • A site has bot protection (Cloudflare, DataDome, WAF)
  • A page requires JavaScript rendering
  • You use search or research tools

export WEBCLAW_API_KEY=wc_your_key

# Automatic: tries local first, cloud on bot detection
webclaw https://protected-site.com

# Force cloud
webclaw --cloud https://spa-site.com
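The local-first flow described above — extract locally, fall back to the cloud only when bot protection is detected — can be sketched as below. The detector and both fetchers are hypothetical stand-ins, not webclaw internals:

```python
# Sketch of local-first extraction with cloud fallback on bot detection.
def fetch_with_fallback(url, fetch_local, fetch_cloud, api_key):
    status, body = fetch_local(url)
    # Crude stand-in for bot detection: block status codes or a captcha page
    bot_blocked = status in (403, 429) or "captcha" in body.lower()
    if not bot_blocked:
        return ("local", body)
    if api_key is None:
        raise RuntimeError("bot protection detected and no WEBCLAW_API_KEY set")
    return ("cloud", fetch_cloud(url, api_key))

# Stubs simulating a bot-protected site
route, text = fetch_with_fallback(
    "https://protected-site.test",
    fetch_local=lambda u: (403, "Access denied"),
    fetch_cloud=lambda u, key: "# Clean markdown",
    api_key="wc_test",
)
# route -> "cloud"
```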

SDKs

npm install @webclaw/sdk                  # TypeScript/JavaScript
pip install webclaw                       # Python
go get github.com/0xMassi/webclaw-go      # Go

Use cases

  • AI agents — Give Claude/Cursor/GPT real-time web access via MCP
  • Research — Crawl documentation, competitor sites, news archives
  • Price monitoring — Track changes with --diff-with snapshots
  • Training data — Prepare web content for fine-tuning with token-optimized output
  • Content pipelines — Batch extract + summarize in CI/CD
  • Brand intelligence — Extract visual identity from any website

Community

  • Discord — questions, feedback, show what you built
  • GitHub Issues — bug reports and feature requests

Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

Acknowledgments

TLS and HTTP/2 browser fingerprinting is powered by wreq and http2 by @0x676e67, who pioneered browser-grade HTTP/2 fingerprinting in Rust.

License

AGPL-3.0
