CrawlForge MCP
CrawlForge MCP is a production-ready MCP server with 20 web scraping tools for AI agents. It gives Claude, Cursor, and any MCP-compatible client the ability to fetch URLs, extract structured data with CSS/XPath selectors, run deep multi-step research, bypass anti-bot detection with TLS fingerprint randomization, process documents, monitor page changes, and more. Credit-based pricing with a free tier (1,000 credits/month, no credit card required).
CrawlForge MCP Server
Professional web scraping and content extraction server implementing the Model Context Protocol (MCP). Get started with 1,000 free credits - no credit card required!
🎯 Features
- 20 Professional Tools: Web scraping, deep research, stealth browsing, content analysis
- Free Tier: 1,000 credits to get started instantly
- MCP Compatible: Works with Claude, Cursor, and other MCP-enabled AI tools
- Enterprise Ready: Scale up with paid plans for production use
- Credit-Based: Pay only for what you use
🚀 Quick Start (2 Minutes)
1. Install from NPM
npm install -g crawlforge-mcp-server
2. Setup Your API Key
npx crawlforge-setup
This will:
- Guide you through getting your free API key
- Configure your credentials securely
- Auto-configure Claude Code and Cursor (if installed)
- Verify your setup is working
Don't have an API key? Get one free at https://www.crawlforge.dev/signup
3. Configure Your IDE (if not auto-configured)
🤖 For Claude Desktop
Add to claude_desktop_config.json:
{
  "mcpServers": {
    "crawlforge": {
      "command": "npx",
      "args": ["crawlforge-mcp-server"]
    }
  }
}
Location:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%/Claude/claude_desktop_config.json
- Linux: ~/.config/Claude/claude_desktop_config.json
Restart Claude Desktop to activate.
🖥️ For Claude Code CLI (Auto-configured)
The setup wizard automatically configures Claude Code by adding to ~/.claude.json:
{
  "mcpServers": {
    "crawlforge": {
      "type": "stdio",
      "command": "crawlforge"
    }
  }
}
After setup, restart Claude Code to activate.
💻 For Cursor IDE (Auto-configured)
The setup wizard automatically configures Cursor by adding to ~/.cursor/mcp.json:
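If you need to configure Cursor by hand instead, a sketch of a `~/.cursor/mcp.json` entry is below. It mirrors the stdio entries shown above for Claude; the exact fields the wizard writes may differ by version, so treat this as an assumption and compare against a wizard-generated file.

```json
{
  "mcpServers": {
    "crawlforge": {
      "command": "npx",
      "args": ["crawlforge-mcp-server"]
    }
  }
}
```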
Restart Cursor to activate.
📊 Available Tools
Basic Tools (1 credit each)
- fetch_url - Fetch content from any URL
- extract_text - Extract clean text from web pages
- extract_links - Get all links from a page
- extract_metadata - Extract page metadata
Advanced Tools (2-3 credits)
- scrape_structured - Extract structured data with CSS selectors
- search_web - Search the web using Google Search API
- summarize_content - Generate intelligent summaries
- analyze_content - Comprehensive content analysis
- extract_structured - LLM-powered schema-driven extraction
- track_changes - Monitor content changes over time
Premium Tools (5-10 credits)
- crawl_deep - Deep crawl entire websites
- map_site - Discover and map website structure
- batch_scrape - Process multiple URLs simultaneously
- deep_research - Multi-stage research with source verification
- stealth_mode - Anti-detection browser management
Heavy Processing (3-10 credits)
- process_document - Multi-format document processing
- extract_content - Enhanced content extraction
- scrape_with_actions - Browser automation chains
- generate_llms_txt - Generate AI interaction guidelines
- localization - Multi-language and geo-location management
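Because each tool debits a fixed number of credits, you can budget a workflow before running it. The sketch below (in Python; the per-tool costs are assumptions taken from the low end of each tier above, so treat the result as a lower bound, not an authoritative quote):

```python
# Approximate per-call credit costs, assuming the LOW end of each tier above.
# These values are illustrative for budgeting, not authoritative prices.
TOOL_CREDITS = {
    "fetch_url": 1, "extract_text": 1, "extract_links": 1, "extract_metadata": 1,
    "scrape_structured": 2, "search_web": 2, "summarize_content": 2,
    "analyze_content": 2, "extract_structured": 2, "track_changes": 2,
    "crawl_deep": 5, "map_site": 5, "batch_scrape": 5,
    "deep_research": 5, "stealth_mode": 5,
    "process_document": 3, "extract_content": 3, "scrape_with_actions": 3,
}

def estimate_credits(calls):
    """Sum the credit cost of a planned sequence of (tool, count) pairs."""
    return sum(TOOL_CREDITS[tool] * count for tool, count in calls)

plan = [("search_web", 3), ("fetch_url", 10), ("summarize_content", 3)]
print(estimate_credits(plan))  # 3*2 + 10*1 + 3*2 = 22
```

A plan like this makes it easy to check a workflow against your monthly allowance (e.g. the 1,000-credit free tier) before running it.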
💳 Pricing
| Plan | Credits/Month | Best For |
|---|---|---|
| Free | 1,000 | Testing & personal projects |
| Starter | 5,000 | Small projects & development |
| Professional | 50,000 | Professional use & production |
| Enterprise | 250,000 | Large scale operations |
All plans include:
- Access to all 20 tools
- Credits never expire and roll over month-to-month
- API access and webhook notifications
View full pricing
🔧 Advanced Configuration
Environment Variables
# Optional: set API key via environment
export CRAWLFORGE_API_KEY="cf_live_your_api_key_here"

# Optional: custom API endpoint (for enterprise)
export CRAWLFORGE_API_URL="https://api.crawlforge.dev"
Manual Configuration
Your configuration is stored at ~/.crawlforge/config.json:
{
  "apiKey": "cf_live_...",
  "userId": "user_...",
  "email": "[email protected]"
}
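A tool that integrates with CrawlForge typically resolves the key from both sources. A minimal sketch of that lookup, assuming the environment variable takes precedence over the stored config (the common convention; confirm against the docs):

```python
import json
import os
from pathlib import Path

def resolve_api_key(config_path=Path.home() / ".crawlforge" / "config.json"):
    """Return the API key: CRAWLFORGE_API_KEY env var first, then the config file."""
    key = os.environ.get("CRAWLFORGE_API_KEY")
    if key:
        return key
    try:
        return json.loads(Path(config_path).read_text())["apiKey"]
    except (FileNotFoundError, KeyError, json.JSONDecodeError):
        return None  # not configured yet
```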
📖 Usage Examples
Once configured, use these tools in your AI assistant:
- "Search for the latest AI news"
- "Extract all links from example.com"
- "Crawl the documentation site and summarize it"
- "Monitor this page for changes"
- "Extract product prices from this e-commerce site"
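Under the hood, an MCP client translates prompts like these into JSON-RPC 2.0 `tools/call` requests sent to the server over stdio. A stdlib-only sketch of the message shape (the tool name and arguments are taken from the tool list above; the framing follows the MCP specification, which also requires an initialization handshake not shown here):

```python
import json

def tool_call_request(request_id, tool, arguments):
    """Build an MCP tools/call request (JSON-RPC 2.0) as a JSON string."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# e.g. the "Extract all links from example.com" prompt could become:
msg = tool_call_request(1, "extract_links", {"url": "https://example.com"})
print(msg)
```

Your AI assistant handles this framing for you; the sketch is only to show what travels over the wire when a prompt invokes a tool.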
🔒 Security & Privacy
- Secure Authentication: API keys required for all operations (no bypass methods)
- Local Storage: API keys stored securely at ~/.crawlforge/config.json
- HTTPS Only: All connections use encrypted HTTPS
- No Data Retention: We don't store scraped data, only usage logs
- Rate Limiting: Built-in protection against abuse
- Compliance: Respects robots.txt and GDPR requirements
Security Updates
v3.0.3 (2025-10-01): Removed authentication bypass vulnerability. All users must authenticate with valid API keys.
🆘 Support
- Documentation: https://www.crawlforge.dev/docs
- Issues: GitHub Issues
- Email: [email protected]
- Discord: Join our community
📄 License
MIT License - see LICENSE file for details.
🤝 Contributing
Contributions are welcome! Please read our Contributing Guide first.
Built with ❤️ by the CrawlForge team
Website | Documentation | API Reference