Firecrawl MCP Server
A Model Context Protocol (MCP) server for web scraping, content searching, site crawling, and data extraction using the Firecrawl API.
Features
- Web Scraping: Extract content from any webpage with customizable options
  - Mobile device emulation
  - Ad and popup blocking
  - Content filtering
  - Structured data extraction
  - Multiple output formats
- Content Search: Intelligent search capabilities
  - Multi-language support
  - Location-based results
  - Customizable result limits
  - Structured output formats
- Site Crawling: Advanced web crawling functionality
  - Depth control
  - Path filtering
  - Rate limiting
  - Progress tracking
  - Sitemap integration
- Site Mapping: Generate site structure maps
  - Subdomain support
  - Search filtering
  - Link analysis
  - Visual hierarchy
- Data Extraction: Extract structured data from multiple URLs
  - Schema validation
  - Batch processing
  - Web search enrichment
  - Custom extraction prompts
Installation
# Global installation
npm install -g @modelcontextprotocol/mcp-server-firecrawl
# Local project installation
npm install @modelcontextprotocol/mcp-server-firecrawl
Quick Start
- Get your Firecrawl API key from the developer portal

- Set your API key:

  Unix/Linux/macOS (bash/zsh):

  export FIRECRAWL_API_KEY=your-api-key

  Windows (Command Prompt):

  set FIRECRAWL_API_KEY=your-api-key

  Windows (PowerShell):

  $env:FIRECRAWL_API_KEY = "your-api-key"

  Alternative: Using .env file (recommended for development):

  # Install dotenv
  npm install dotenv
  # Create .env file
  echo "FIRECRAWL_API_KEY=your-api-key" > .env

  Then in your code:

  import dotenv from 'dotenv';
  dotenv.config();

- Run the server:

  mcp-server-firecrawl
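For reference, this is essentially what MCP clients do when you register the server in their settings: launch the binary as a child process with the API key in its environment, speaking MCP over stdin/stdout. A minimal sketch in Node/TypeScript (the file name and key value are placeholders):

// launch-server.ts — sketch of launching the server as a child process,
// the way an MCP client would. Assumes the global install above put
// `mcp-server-firecrawl` on PATH.
import { spawn } from "node:child_process";

const server = spawn("mcp-server-firecrawl", {
  env: { ...process.env, FIRECRAWL_API_KEY: "your-api-key" },
  stdio: ["pipe", "pipe", "inherit"], // MCP messages flow over stdin/stdout
});

server.on("exit", (code) => console.log(`server exited with code ${code}`));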
Integration
Claude Desktop App
Add to your MCP settings:
{
"firecrawl": {
"command": "mcp-server-firecrawl",
"env": {
"FIRECRAWL_API_KEY": "your-api-key"
}
}
}
Claude VSCode Extension
Add to your MCP configuration:
{
"mcpServers": {
"firecrawl": {
"command": "mcp-server-firecrawl",
"env": {
"FIRECRAWL_API_KEY": "your-api-key"
}
}
}
}
Usage Examples
Web Scraping
// Basic scraping
{
name: "scrape_url",
arguments: {
url: "https://example.com",
formats: ["markdown"],
onlyMainContent: true
}
}
// Advanced extraction
{
name: "scrape_url",
arguments: {
url: "https://example.com/blog",
jsonOptions: {
prompt: "Extract article content",
schema: {
title: "string",
content: "string"
}
},
mobile: true,
blockAds: true
}
}
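The tool calls above are normally issued by an MCP client. As a rough sketch of driving the server programmatically, here is how scrape_url might be invoked with the official TypeScript SDK (using @modelcontextprotocol/sdk here is an assumption; adapt to whatever MCP client you use):

// scrape-example.ts — sketch of calling scrape_url through an MCP client.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "mcp-server-firecrawl",
  env: { FIRECRAWL_API_KEY: "your-api-key" },
});
const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

const result = await client.callTool({
  name: "scrape_url",
  arguments: { url: "https://example.com", formats: ["markdown"], onlyMainContent: true },
});
console.log(result.content); // tool output as MCP content blocks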
Site Crawling
// Basic crawling
{
name: "crawl",
arguments: {
url: "https://example.com",
maxDepth: 2,
limit: 100
}
}
// Advanced crawling
{
name: "crawl",
arguments: {
url: "https://example.com",
maxDepth: 3,
includePaths: ["/blog", "/products"],
excludePaths: ["/admin"],
ignoreQueryParameters: true
}
}
Site Mapping
// Generate site map
{
name: "map",
arguments: {
url: "https://example.com",
includeSubdomains: true,
limit: 1000
}
}
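The map and extract tools combine naturally: map discovers URLs, which can then be handed to extract. Continuing the client sketch from the scraping section above (result shapes are assumptions; inspect the returned content blocks for the actual payload):

// Sketch: discover URLs with `map`, then pass a subset to `extract`.
const mapResult = await client.callTool({
  name: "map",
  arguments: { url: "https://example.com", limit: 50 },
});
// The discovered URLs arrive as MCP content blocks; parsing them out is
// left to the caller, since the exact payload format may vary.
console.log(mapResult.content);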
Data Extraction
// Extract structured data
{
name: "extract",
arguments: {
urls: ["https://example.com/product1", "https://example.com/product2"],
prompt: "Extract product details",
schema: {
name: "string",
price: "number",
description: "string"
}
}
}
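The schema above maps field names to type-name strings. On the client side, an extraction result can be checked against such a schema with a small runtime guard; this sketch is illustrative and not part of the server's API:

// validate.ts — illustrative runtime check for { field: "string" | "number" }
// style schemas like the one in the extract example above.
type FieldType = "string" | "number";
type Schema = Record<string, FieldType>;

function matchesSchema(record: unknown, schema: Schema): boolean {
  if (typeof record !== "object" || record === null) return false;
  const obj = record as Record<string, unknown>;
  return Object.entries(schema).every(([key, type]) => typeof obj[key] === type);
}

const productSchema: Schema = { name: "string", price: "number", description: "string" };
console.log(matchesSchema({ name: "Widget", price: 9.99, description: "A widget." }, productSchema)); // true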
Configuration
See configuration guide for detailed setup options.
API Documentation
See API documentation for detailed endpoint specifications.
Development
# Install dependencies
npm install
# Build
npm run build
# Run tests
npm test
# Start in development mode
npm run dev
Examples
Check the examples directory for more usage examples:
- Basic scraping: scrape.ts
- Crawling and mapping: crawl-and-map.ts
Error Handling
The server implements robust error handling:
- Rate limiting with exponential backoff (sketched below)
- Automatic retries
- Detailed error messages
- Debug logging
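As an illustration of the retry behavior described above (a sketch of the general pattern, not the server's actual internals), exponential backoff around a request might look like this:

// backoff.ts — illustrative exponential backoff with jitter.
async function withRetries<T>(fn: () => Promise<T>, maxAttempts = 5): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt + 1 >= maxAttempts) throw err;
      // Delay doubles each attempt (1s, 2s, 4s, ...) plus random jitter
      // so concurrent clients don't retry in lockstep.
      const delayMs = 1000 * 2 ** attempt + Math.random() * 250;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Usage: retry a rate-limited request up to 5 times.
const response = await withRetries(() => fetch("https://example.com"));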
Security
- API key protection
- Request validation
- Domain allowlisting
- Rate limiting
- Safe error messages
Contributing
See CONTRIBUTING.md for contribution guidelines.
License
MIT License - see LICENSE for details.