firecrawl-crawl
by firecrawl

Bulk extract content from a website. Crawls pages by following links, up to a configurable depth and page limit.

npx skills add https://github.com/firecrawl/firecrawl-cli --skill firecrawl-crawl

firecrawl crawl

When to use

  • You need content from many pages on a site (e.g., all /docs/)
  • You want to extract an entire site section
  • Step 4 in the workflow escalation pattern: search → scrape → map → crawl → interact

Quick start

# Crawl a docs section
firecrawl crawl "<url>" --include-paths /docs --limit 50 --wait -o .firecrawl/crawl.json

# Full crawl with depth limit
firecrawl crawl "<url>" --max-depth 3 --wait --progress -o .firecrawl/crawl.json

# Check status of a running crawl
firecrawl crawl <job-id>
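Without --wait, crawl returns a job ID that you poll with `firecrawl crawl <job-id>` until it finishes. A minimal polling sketch; the `poll_until_done` helper is hypothetical (not part of the firecrawl CLI), and the exact status text the CLI prints on completion is an assumption, so check your real output before matching on it:

```shell
# Poll a status command until its output contains a completion marker.
# Helper name is ours; the "completed" marker below is an assumption --
# inspect what `firecrawl crawl <job-id>` actually prints and adjust.
poll_until_done() {
  local marker="$1"; shift
  while ! "$@" | grep -q "$marker"; do
    sleep 10   # avoid hammering the API between status checks
  done
}

# Hypothetical usage:
#   JOB_ID=...   # printed when you start a crawl without --wait
#   poll_until_done completed firecrawl crawl "$JOB_ID"
```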

Options

Option                     Description
--wait                     Wait for the crawl to complete before returning
--progress                 Show progress while waiting
--limit <n>                Maximum number of pages to crawl
--max-depth <n>            Maximum link depth to follow
--include-paths <paths>    Only crawl URLs matching these paths
--exclude-paths <paths>    Skip URLs matching these paths
--delay <ms>               Delay between requests, in milliseconds
--max-concurrency <n>      Maximum number of parallel crawl workers
--pretty                   Pretty-print JSON output
-o, --output <path>        Output file path

Tips

  • Always use --wait when you need the results immediately. Without it, crawl returns a job ID for async polling.
  • Use --include-paths to scope the crawl — don't crawl an entire site when you only need one section.
  • Crawl consumes credits per page. Check firecrawl credit-usage before large crawls.
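Once a crawl written with -o finishes, you will usually want the per-page content out of the JSON file. A minimal sketch, assuming the output is either a JSON object whose "data" field is a list of page records or a bare list, with each record carrying "markdown" and "metadata.sourceURL" keys; this shape is an assumption, so inspect your actual output file before relying on it:

```python
import json

def extract_pages(path):
    """Return (url, markdown) pairs from a crawl output file.

    Assumed shape (verify against your own output):
      {"data": [{"markdown": "...", "metadata": {"sourceURL": "..."}}]}
    A bare top-level list of page records is also handled.
    """
    with open(path) as f:
        doc = json.load(f)
    # Unwrap the assumed "data" envelope; fall back to a bare list.
    pages = doc.get("data", []) if isinstance(doc, dict) else doc
    return [
        (p.get("metadata", {}).get("sourceURL", ""), p.get("markdown", ""))
        for p in pages
    ]
```

From here you can write each page to its own file or feed the markdown into downstream tooling.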
