# Crawl4AI Claude Skill

A comprehensive Claude skill for web crawling and data extraction, built on Crawl4AI. It enables Claude to scrape websites, extract structured data with CSS or LLM strategies, handle JavaScript-heavy pages, crawl multiple URLs, and build automated web data pipelines. Includes a complete SDK reference, example scripts, and tests.
## Features
- Web Crawling: Extract content from any website with full JavaScript support
- Data Extraction: Schema-based CSS extraction (LLM-free) and LLM-based extraction
- Markdown Generation: Clean, well-formatted markdown output optimized for LLM consumption
- Content Filtering: Relevance-based filtering using BM25 and quality-based pruning
- Session Management: Persistent sessions for authenticated crawling
- Batch Processing: Concurrent multi-URL crawling
- CLI & SDK: Both command-line interface and Python SDK support
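The BM25 filtering mentioned above ranks page chunks by relevance to a query. As a rough illustration of the idea only (not crawl4ai's implementation, which ships its own BM25 content filter), here is a minimal stdlib-only BM25 scorer:

```python
import math
import re
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each document against the query with the classic BM25 formula."""
    tokenize = lambda s: re.findall(r"\w+", s.lower())
    doc_tokens = [tokenize(d) for d in docs]
    n = len(docs)
    avgdl = sum(len(t) for t in doc_tokens) / n
    # Document frequency: how many documents contain each term.
    df = Counter(term for toks in doc_tokens for term in set(toks))
    query_terms = tokenize(query)
    scores = []
    for toks in doc_tokens:
        tf = Counter(toks)
        score = 0.0
        for term in query_terms:
            if term not in tf:
                continue
            idf = math.log(1 + (n - df[term] + 0.5) / (df[term] + 0.5))
            denom = tf[term] + k1 * (1 - b + b * len(toks) / avgdl)
            score += idf * tf[term] * (k1 + 1) / denom
        scores.append(score)
    return scores

chunks = [
    "Install the crawler with pip and run the setup command.",
    "The stadium was full for the final match of the season.",
]
scores = bm25_scores("pip install setup", chunks)
```

Chunks that score zero share no terms with the query and would be pruned from the output.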
## Installation

### Method 1: Import as ZIP (Recommended for Claude Desktop)

1. Download or clone this repository
2. Create a ZIP file of the `crawl4ai` directory:

   ```bash
   cd crawl4ai-skill
   zip -r crawl4ai.zip crawl4ai/
   ```

3. In Claude Desktop, go to Settings → Developer → Import Skill
4. Select the `crawl4ai.zip` file
### Method 2: Git Clone

```bash
git clone https://github.com/brettdavies/crawl4ai-skill.git
cd crawl4ai-skill
```

Then add the skill directory to Claude's skills folder or import via Claude Desktop.
## Prerequisites

This skill requires the Crawl4AI Python library:

```bash
pip install crawl4ai
crawl4ai-setup

# Verify installation
crawl4ai-doctor
```
## Quick Start

### CLI Usage (Recommended for Quick Tasks)

```bash
# Basic crawling - returns markdown
crwl https://example.com

# Get markdown output
crwl https://example.com -o markdown

# JSON output with cache bypass
crwl https://example.com -o json -v --bypass-cache
```
### Python SDK Usage

```python
import asyncio
from crawl4ai import AsyncWebCrawler

async def main():
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://example.com")
        print(result.markdown[:500])

asyncio.run(main())
```
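Beyond single pages, the SDK supports concurrent multi-URL crawling. The bounded-concurrency pattern can be sketched with plain asyncio; `fake_fetch` below is a stand-in for `await crawler.arun(url)`, and the helper name `crawl_many` is illustrative, not part of the library:

```python
import asyncio

async def crawl_many(urls, fetch, max_concurrent=5):
    """Crawl URLs concurrently with a cap on simultaneous requests."""
    sem = asyncio.Semaphore(max_concurrent)

    async def one(url):
        async with sem:
            return url, await fetch(url)

    # gather preserves input order in its result list
    return await asyncio.gather(*(one(u) for u in urls))

# Stand-in for `await crawler.arun(url)`; returns fake markdown.
async def fake_fetch(url):
    await asyncio.sleep(0)
    return f"# Page at {url}"

urls = ["https://a.example", "https://b.example", "https://c.example"]
results = asyncio.run(crawl_many(urls, fake_fetch))
```

The semaphore keeps the crawler polite toward target sites while still overlapping network waits.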
## Documentation
- SKILL.md - Complete skill documentation with examples
- CLI Guide - Command-line interface reference
- SDK Guide - Python SDK quick reference
- Complete SDK Reference - Full API documentation (5900+ lines)
## Common Use Cases

### Documentation to Markdown

```bash
crwl https://docs.example.com -o markdown > docs.md
```
### E-commerce Product Monitoring

```bash
# Generate schema once (uses LLM)
python crawl4ai/scripts/extraction_pipeline.py --generate-schema https://shop.com "extract products"

# Use schema for extraction (no LLM costs)
crwl https://shop.com -e extract_css.yml -s product_schema.json -o json
```
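The generated `product_schema.json` is a flat CSS-selector schema: a base selector for each repeating element plus per-field selectors. The sketch below shows the general shape such a schema takes; the field names and selectors here are hypothetical, not actual tool output:

```python
import json

# Hypothetical product schema: one record per matching baseSelector element,
# with each field extracted by its own CSS selector.
product_schema = {
    "name": "Products",
    "baseSelector": "div.product-card",
    "fields": [
        {"name": "title", "selector": "h2.title", "type": "text"},
        {"name": "price", "selector": "span.price", "type": "text"},
        {"name": "url", "selector": "a", "type": "attribute", "attribute": "href"},
    ],
}

schema_json = json.dumps(product_schema, indent=2)
```

Because the schema is plain JSON, it can be generated once with an LLM and then reused for every subsequent crawl at no LLM cost.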
### News Aggregation

```bash
# Multiple sources with filtering
for url in news1.com news2.com news3.com; do
  crwl "https://$url" -f filter_bm25.yml -o markdown-fit
done
```
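The `markdown-fit` output keeps only the parts of a page relevant to a query. As a crude stand-in for that step, this sketch filters markdown paragraphs by term overlap (crawl4ai's real filter uses proper relevance scoring); the function name is illustrative:

```python
import re

def fit_markdown(markdown, query, min_overlap=1):
    """Keep only paragraphs sharing at least `min_overlap` terms with the query."""
    terms = set(re.findall(r"\w+", query.lower()))
    kept = []
    for para in markdown.split("\n\n"):
        words = set(re.findall(r"\w+", para.lower()))
        if len(words & terms) >= min_overlap:
            kept.append(para)
    return "\n\n".join(kept)

page = "# Tech news\n\nNew GPU released today.\n\nLocal weather is sunny."
filtered = fit_markdown(page, "GPU hardware release")
```

Aggregating the filtered output from several sources yields a compact, on-topic digest instead of full page dumps.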
## Scripts

The skill includes helper scripts in `crawl4ai/scripts/`:

- `basic_crawler.py` - Simple markdown extraction
- `batch_crawler.py` - Multi-URL processing
- `extraction_pipeline.py` - Schema generation and extraction
## Testing

Run the test suite to verify the skill works correctly:

```bash
cd crawl4ai/tests
python run_all_tests.py
```
## Marketplace

This skill is available on Claude Skills marketplaces.
## License

MIT License - see LICENSE file for details.

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## Support

For issues, questions, or feature requests, please open an issue on the GitHub repository.

## Changelog

See CHANGELOG.md for version history and updates.