tavily-crawl by tavily-ai

Multi-page website crawler with semantic filtering and markdown export. Crawl entire site sections with depth and breadth control; filter by path regex, domain, or natural language instructions to focus results. Save each page as a local markdown file via --output-dir, or return structured JSON for agentic processing. Use semantic instructions with chunk extraction to prevent context bloat when feeding results to LLMs; use full-page extraction for offline documentation downloads. Supports...

npx skills add https://github.com/tavily-ai/skills --skill tavily-crawl

tavily crawl

Crawl a website and extract content from multiple pages. Supports saving each page as a local markdown file.

Before running any command

If tvly is not found on PATH, install it first:

curl -fsSL https://cli.tavily.com/install.sh | bash && tvly login

Do not skip this step or fall back to other tools.
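
If you want to automate that check, a minimal shell guard (assuming a POSIX shell; the install one-liner and tvly login are the same as above) is:

# Install tvly only when it is missing from PATH, then authenticate
if ! command -v tvly >/dev/null 2>&1; then
  curl -fsSL https://cli.tavily.com/install.sh | bash && tvly login
fi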

See tavily-cli for alternative install methods and auth options.

When to use

  • You need content from many pages on a site (e.g., all /docs/)
  • You want to download documentation for offline use
  • Step 4 in the workflow: search → extract → map → crawl → research (sketched after this list)
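
A rough sketch of that escalation follows. Only the crawl flags are documented on this page; the other subcommand names and their --json support are assumptions taken from the tavily-cli skill description, so verify them before use:

# Escalation sketch; steps 1-3 and 5 use assumed syntax, verify against tavily-cli
tvly search "example.com authentication docs" --json           # 1. quick web results
tvly extract "https://docs.example.com/auth" --json            # 2. one known page
tvly map "https://docs.example.com" --json                     # 3. discover URLs
tvly crawl "https://docs.example.com" --limit 20 --json        # 4. many pages (this skill)
tvly research "How does docs.example.com handle auth?" --json  # 5. cited synthesis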

Quick start

# Basic crawl
tvly crawl "https://docs.example.com" --json

# Save each page as a markdown file
tvly crawl "https://docs.example.com" --output-dir ./docs/

# Deeper crawl with limits
tvly crawl "https://docs.example.com" --max-depth 2 --limit 50 --json

# Filter to specific paths
tvly crawl "https://example.com" --select-paths "/api/.*,/guides/.*" --exclude-paths "/blog/.*" --json

# Semantic focus (returns relevant chunks, not full pages)
tvly crawl "https://docs.example.com" --instructions "Find authentication docs" --chunks-per-source 3 --json

Options

Option                              Description
--max-depth                         Levels deep (1-5, default: 1)
--max-breadth                       Links per page (default: 20)
--limit                             Total pages cap (default: 50)
--instructions                      Natural language guidance for semantic focus
--chunks-per-source                 Chunks per page (1-5, requires --instructions)
--extract-depth                     basic (default) or advanced
--format                            markdown (default) or text
--select-paths                      Comma-separated regex patterns to include
--exclude-paths                     Comma-separated regex patterns to exclude
--select-domains                    Comma-separated regex for domains to include
--exclude-domains                   Comma-separated regex for domains to exclude
--allow-external / --no-external    Include external links (default: allow)
--include-images                    Include images
--timeout                           Max wait (10-150 seconds)
-o, --output                        Save JSON output to file
--output-dir                        Save each page as a .md file in directory
--json                              Structured JSON output
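
These flags compose; for example, a focused offline grab of just an API section (URL and paths are illustrative) could look like:

# Crawl only /api/ pages two levels deep, render heavier pages, save markdown locally
tvly crawl "https://docs.example.com" \
  --max-depth 2 --limit 40 \
  --select-paths "/api/.*" --exclude-paths "/api/changelog/.*" \
  --extract-depth advanced --output-dir ./api-docs/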

Crawl for context vs. data collection

For agentic use (feeding results to an LLM):

Always use --instructions + --chunks-per-source. Returns only relevant chunks instead of full pages — prevents context explosion.

tvly crawl "https://docs.example.com" --instructions "API authentication" --chunks-per-source 3 --json

For data collection (saving to files):

Use --output-dir without --chunks-per-source to get full pages as markdown files.

tvly crawl "https://docs.example.com" --max-depth 2 --output-dir ./docs/

Tips

  • Start conservative (--max-depth 1, --limit 20) and scale up; see the sketch after this list.
  • Use --select-paths to focus on the section you need.
  • Use map first to understand site structure before a full crawl.
  • Always set --limit to prevent runaway crawls.
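
A sketch of the start-small pattern, using only flags documented above (URL and paths are illustrative):

# First pass: shallow and capped, to gauge the site
tvly crawl "https://docs.example.com" --max-depth 1 --limit 20 --json

# Second pass: widen once the first pass looks right, still with a hard cap
tvly crawl "https://docs.example.com" --max-depth 2 --limit 50 --select-paths "/docs/.*" --output-dir ./docs/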

See also

More skills by tavily-ai

crawl
by tavily-ai
Extract and save website content as markdown files for offline access and analysis. Supports configurable crawl depth (1-5 levels), breadth limits, and page caps to balance coverage against performance. Includes path filtering via regex patterns to focus on specific sections and exclude irrelevant content. Offers two modes: full-page extraction for data collection, or semantic chunking with natural language instructions for feeding results into LLM context. Provides a companion Map API for URL...
extract
by tavily-ai
Extract clean content from specific URLs using Tavily's extraction API. Supports up to 20 URLs per request with optional query-based reranking to focus on relevant content chunks. Two extraction modes: basic for fast text extraction, advanced for JavaScript-rendered pages and structured data. Automatic OAuth authentication via browser on first run, or manual API key configuration in settings. Returns markdown or plain text format with optional image URLs and configurable timeout up to 60 seconds.
research
by tavily-ai
Comprehensive research on any topic with automatic source gathering, analysis, and citations. Conducts multi-source web research with explicit citations, ideal for comparisons, current events, market analysis, and detailed reports. Offers three model options: mini for targeted single-topic research (~30s), pro for comprehensive multi-angle analysis (~60-120s), and auto for API-driven complexity detection. Authenticates via OAuth through Tavily MCP server with automatic browser-based login on...
search
by tavily-ai
Web search with LLM-optimized results, relevance scoring, and flexible filtering. Supports four search depth modes (ultra-fast, fast, basic, advanced) with configurable latency and relevance tradeoffs. Includes domain filtering, time range constraints, date ranges, country boosting, and raw content extraction. Returns results with title, URL, content snippet, and relevance score; optional image results and favicons. Automatic OAuth authentication via Tavily MCP server or API key configuration;...
tavily-best-practices
by tavily-ai
Web search API for LLMs with real-time data access, content extraction, site crawling, and AI-powered research. Five core methods: search() for web results, extract() for URL content, crawl() for site-wide extraction, map() for URL discovery, and research() for end-to-end AI synthesis. Supports Python and JavaScript SDKs with async clients for parallel queries and configurable search depth (ultra-fast/fast/basic/advanced). Crawl method accepts semantic instructions to focus extraction on...
tavily-cli
by tavily-ai
Web search, content extraction, site crawling, and deep research via Tavily CLI. Five command modes covering search, extraction, URL discovery, bulk crawling, and multi-source research with citations. All commands support JSON output and file saving for structured, agentic workflows. An escalation pattern guides you from simple search through extraction, mapping, and crawling to comprehensive research based on your needs. Requires tavily-cli installation and API key authentication via tvly login.
tavily-dynamic-search
by tavily-ai
Search the web, filter results, and extract content so that raw search data never enters your context window. Only your curated print() output comes back.
tavily-extract
by tavily-ai
Extract clean markdown or text from up to 20 URLs, with JavaScript rendering and query-focused chunking support. Handles JavaScript-rendered pages with configurable extraction depth (basic for simple pages, advanced for dynamic SPAs and tables). Supports query-focused extraction to return only relevant content chunks instead of full pages. Returns LLM-optimized markdown by default, with options for plain text format and structured JSON output. Processes up to 20 URLs in a single call;...
