# firecrawl-crawl

by firecrawl

Bulk extract content from entire websites or site sections with depth and path filtering. Crawls pages following links up to configurable depth limits and page counts, with path inclusion/exclusion filters to scope extraction. Supports async job polling or synchronous waiting with progress display via the `--wait` and `--progress` flags. Offers concurrency control, request delays, and JSON output formatting for integration into agent workflows. Part of a four-step escalation pattern: search → scrape →...

Install with:

`npx skills add https://github.com/firecrawl/cli --skill firecrawl-crawl`

# firecrawl crawl
Bulk extract content from a website. Crawls pages following links up to a depth/limit.
## When to use

- You need content from many pages on a site (e.g., all of `/docs/`)
- You want to extract an entire site section
- Step 4 in the workflow escalation pattern: search → scrape → map → crawl → interact
## Quick start

```shell
# Crawl a docs section
firecrawl crawl "<url>" --include-paths /docs --limit 50 --wait -o .firecrawl/crawl.json

# Full crawl with depth limit
firecrawl crawl "<url>" --max-depth 3 --wait --progress -o .firecrawl/crawl.json

# Check status of a running crawl
firecrawl crawl <job-id>
```
## Options

| Option | Description |
|---|---|
| `--wait` | Wait for crawl to complete before returning |
| `--progress` | Show progress while waiting |
| `--limit <n>` | Max pages to crawl |
| `--max-depth <n>` | Max link depth to follow |
| `--include-paths <paths>` | Only crawl URLs matching these paths |
| `--exclude-paths <paths>` | Skip URLs matching these paths |
| `--delay <ms>` | Delay between requests |
| `--max-concurrency <n>` | Max parallel crawl workers |
| `--pretty` | Pretty-print JSON output |
| `-o, --output <path>` | Output file path |
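Once a crawl finishes, the file written by `-o` holds every page in one JSON document. A common follow-up step in agent workflows is splitting that file into per-page markdown files. The sketch below assumes (this shape is not documented above and may differ in your CLI version) that the output contains a `data` array whose entries carry a `markdown` string and a `metadata.sourceURL` field:

```python
import json
from pathlib import Path
from urllib.parse import urlparse

def split_crawl_output(crawl_json: str, out_dir: str) -> list[str]:
    """Write each crawled page's markdown to its own file; return the paths."""
    pages = json.loads(crawl_json).get("data", [])
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    for page in pages:
        url = page.get("metadata", {}).get("sourceURL", "page")
        # Build a filesystem-safe name from the URL path, e.g. /docs/intro -> docs-intro.md
        slug = urlparse(url).path.strip("/").replace("/", "-") or "index"
        path = out / f"{slug}.md"
        path.write_text(page.get("markdown", ""), encoding="utf-8")
        written.append(str(path))
    return written

# Hypothetical sample mirroring the assumed shape of `.firecrawl/crawl.json`
sample = json.dumps({
    "status": "completed",
    "data": [
        {"markdown": "# Intro", "metadata": {"sourceURL": "https://example.com/docs/intro"}},
        {"markdown": "# API", "metadata": {"sourceURL": "https://example.com/docs/api"}},
    ],
})
print(split_crawl_output(sample, ".firecrawl/pages"))
```

If the real output shape differs, only the two `get` lookups for `data` and `metadata.sourceURL` need adjusting.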
## Tips

- Always use `--wait` when you need the results immediately. Without it, `crawl` returns a job ID for async polling.
- Use `--include-paths` to scope the crawl; don't crawl an entire site when you only need one section.
- Crawl consumes credits per page. Check `firecrawl credit-usage` before large crawls.
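The async path in the first tip — start a crawl without `--wait`, then check `firecrawl crawl <job-id>` until it finishes — is a plain polling loop. A minimal sketch, with the actual status check stubbed out as a caller-supplied function (its output format isn't documented here; in practice it would wrap the `firecrawl crawl <job-id>` command via `subprocess` and parse the result):

```python
import time

def poll_until_done(check_status, interval_s=5.0, timeout_s=600.0):
    """Call check_status() until it reports a terminal state or the timeout hits."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = check_status()  # e.g. wraps `firecrawl crawl <job-id>`
        if status in ("completed", "failed"):
            return status
        time.sleep(interval_s)
    raise TimeoutError("crawl job did not finish in time")

# Hypothetical stub standing in for a real status check of a crawl job
states = iter(["scraping", "scraping", "completed"])
print(poll_until_done(lambda: next(states), interval_s=0.01))  # → completed
```

The terminal-state names (`"completed"`, `"failed"`) are assumptions; adjust them to whatever states the CLI actually reports.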
## See also
- firecrawl-scrape — scrape individual pages
- firecrawl-map — discover URLs before deciding to crawl
- firecrawl-download — download site to local files (uses map + scrape)