# Open Crawler MCP Server

A Model Context Protocol (MCP) server for web crawling and content extraction, supporting multiple output formats: text, Markdown, XML, and JSON.
## Features

- **Multiple Output Formats**: Extract content as text, Markdown, structured XML, or JSON
- **Smart Content Extraction**: CSS selector support for targeted content extraction
- **Robots.txt Compliance**: Automatic robots.txt checking and compliance
- **Rate Limiting**: Built-in rate limiting (minimum of 1 second between requests)
- **Size Protection**: Maximum page size limit (10 MB) to prevent memory issues; a sketch of both safeguards follows this list
- **Structured Content**: Extract headings, paragraphs, links, images, and lists separately
- **Error Handling**: Comprehensive error codes for different failure scenarios
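The rate limiting and size protection are straightforward to picture. The following is a minimal illustrative sketch of that kind of safeguard, not the server's actual implementation; only the two limit constants come from the feature list above:

```typescript
// Sketch only: enforce a minimum interval between requests and a page size cap.
const MIN_INTERVAL_MS = 1_000;          // 1 second between requests
const MAX_BYTES = 10 * 1024 * 1024;     // 10 MB page size limit

let lastRequestAt = 0;

async function politeFetch(url: string): Promise<string> {
  // Wait until at least MIN_INTERVAL_MS has passed since the previous request.
  const wait = lastRequestAt + MIN_INTERVAL_MS - Date.now();
  if (wait > 0) await new Promise((resolve) => setTimeout(resolve, wait));
  lastRequestAt = Date.now();

  const res = await fetch(url);
  if (!res.ok) throw new Error(`HTTP ${res.status} for ${url}`);

  // Reject oversized pages early via Content-Length when the header is present.
  const declared = Number(res.headers.get("content-length") ?? 0);
  if (declared > MAX_BYTES) throw new Error("Content size too large (>10MB)");

  // Fallback check on the actual body (approximate: counts code units, not bytes).
  const body = await res.text();
  if (body.length > MAX_BYTES) throw new Error("Content size too large (>10MB)");
  return body;
}
```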
## MCP Client Configuration
Add this server to your MCP client configuration:
```json
{
  "mcpServers": {
    "open-crawler": {
      "command": "npx",
      "args": ["@elchika-inc/open-crawler-mcp-server"]
    }
  }
}
```
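The server can also be driven programmatically over stdio. The sketch below assumes the official MCP TypeScript SDK (`@modelcontextprotocol/sdk`) and runs as an ES module; treat it as a starting point rather than documented usage of this server:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the crawler server over stdio, the same way an MCP client would.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["@elchika-inc/open-crawler-mcp-server"],
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Invoke the crawl_page tool (parameters are documented below).
const result = await client.callTool({
  name: "crawl_page",
  arguments: { url: "https://example.com", format: "markdown" },
});
console.log(result.content);

await client.close();
```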
## Available Tools
### crawl_page
Extracts content from a web page in multiple formats with automatic robots.txt compliance checking.
Parameters:

- `url` (required): Target URL to crawl
- `selector` (optional): CSS selector for targeted content extraction
- `format` (optional): Output format - `text`, `markdown`, `xml`, or `json` (default: `text`)
- `text_only` (optional): Legacy flag for text-only extraction (deprecated; use `format` instead)
Output Formats:

- `text`: Clean plain text with normalized whitespace
- `markdown`: Well-formatted Markdown with headings, links, images, and lists preserved
- `xml`: Structured XML with separate sections for headings, paragraphs, links, images, and lists
- `json`: Structured JSON object containing categorized content elements (a hypothetical shape is sketched below)
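The exact `json` schema is not documented here; as a purely hypothetical illustration of the categorized elements listed above, the result might resemble:

```typescript
// Hypothetical shape only, inferred from the category names above;
// the server's actual JSON output may differ.
interface CrawlJsonResult {
  headings: { level: number; text: string }[];
  paragraphs: string[];
  links: { href: string; text: string }[];
  images: { src: string; alt?: string }[];
  lists: string[][]; // each list as an array of item texts
}
```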
Examples:
Basic text extraction:
```json
{
  "name": "crawl_page",
  "arguments": {
    "url": "https://example.com",
    "format": "text"
  }
}
```
Markdown extraction with CSS selector:
```json
{
  "name": "crawl_page",
  "arguments": {
    "url": "https://example.com",
    "selector": "article",
    "format": "markdown"
  }
}
```
Structured JSON extraction:
```json
{
  "name": "crawl_page",
  "arguments": {
    "url": "https://example.com",
    "format": "json"
  }
}
```
### check_robots
Checks whether a URL is allowed to be crawled according to the site's robots.txt file.
Parameters:

- `url` (required): URL to check for crawling permission
Example:
```json
{
  "name": "check_robots",
  "arguments": {
    "url": "https://example.com/page"
  }
}
```
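A robots.txt check of this kind typically fetches the site's `/robots.txt` and evaluates its rules for the crawler's user agent. The sketch below uses the `robots-parser` npm package as an assumed stand-in, with a hypothetical user agent string; the server's own implementation may differ:

```typescript
import robotsParser from "robots-parser";

// Illustrative robots.txt permission check; not the server's actual code.
async function isCrawlAllowed(pageUrl: string, userAgent = "open-crawler"): Promise<boolean> {
  const robotsUrl = new URL("/robots.txt", pageUrl).href;
  const res = await fetch(robotsUrl);
  if (!res.ok) return true; // missing robots.txt is conventionally treated as "allowed"

  const robots = robotsParser(robotsUrl, await res.text());
  return robots.isAllowed(pageUrl, userAgent) ?? true;
}

console.log(await isCrawlAllowed("https://example.com/page"));
```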
## Error Handling
Common error scenarios (a defensive client-side handler is sketched after this list):
- Network connection issues
- Invalid HTML or missing content
- Robots.txt restrictions
- Request timeouts or rate limits
- Content size too large (>10MB)
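The concrete error codes are not enumerated here, but MCP tool failures generally surface in one of two ways: a thrown protocol-level error, or a tool result flagged with `isError`. A defensive wrapper (assuming the SDK `Client` from the configuration section) might handle both:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Sketch: call crawl_page and distinguish tool-level from protocol-level failures.
async function crawlSafely(client: Client, url: string) {
  try {
    const result = await client.callTool({ name: "crawl_page", arguments: { url } });
    if (result.isError) {
      // Tool-level failure: robots.txt denial, timeout, oversized page, etc.
      console.error("crawl_page failed:", result.content);
      return null;
    }
    return result.content;
  } catch (err) {
    // Protocol-level failure: the server crashed or the transport dropped.
    console.error("MCP error:", err);
    return null;
  }
}
```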
## License
MIT
## Related Servers
- **Cloudflare Browser Rendering**: Provides web context to LLMs using Cloudflare's Browser Rendering API.
- **WebScraping.AI**: Interact with WebScraping.AI for web data extraction and scraping.
- **Steel Puppeteer**: Provides browser automation capabilities using Puppeteer and Steel, configurable for local or cloud instances.
- **MCP Webscan Server**: Fetch, analyze, and extract information from web pages.
- **Chrome MCP Server**: Exposes Chrome browser functionality to AI assistants for automation, content analysis, and semantic search via a Chrome extension.
- **Playwright**: Provides browser automation capabilities using Playwright. Interact with web pages, take screenshots, and execute JavaScript in a real browser environment.
- **Scrapeless**: Integrates real-time Scrapeless Google SERP results (Google Search, Google Flights, Google Maps, Google Jobs, and more) into your LLM applications, enabling dynamic context retrieval for AI workflows, chatbots, and research tools.
- **Selenium MCP Server**: Control web browsers using the Selenium WebDriver for automation and testing.
- **Career Site Jobs**: An MCP server to retrieve up-to-date jobs from company career sites.
- **BrowserCat MCP Server**: Remote browser automation using the BrowserCat API.