# MCP Go Colly Crawler

A web crawling framework that integrates the Model Context Protocol (MCP) with the Colly web scraping library.
## Overview
MCP Go Colly is a sophisticated web crawling framework that integrates the Model Context Protocol (MCP) with the powerful Colly web scraping library. This project aims to provide a flexible and extensible solution for extracting web content for large language model (LLM) applications.
## Features
- Concurrent web crawling with configurable depth and domain restrictions (see the sketch after this list)
- MCP server integration for tool-based crawling
- Graceful shutdown handling
- Robust error handling and result formatting
- Support for both single URL and batch URL crawling
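Under the hood these capabilities come from Colly itself. The sketch below shows, using the gocolly/colly v2 API, how concurrency, crawl depth, domain restrictions, and signal-based graceful shutdown can be wired together; it is a minimal illustration of the underlying library, not this project's exact implementation:

```go
package main

import (
	"context"
	"fmt"
	"os/signal"
	"syscall"

	"github.com/gocolly/colly/v2"
)

func main() {
	// Graceful shutdown: cancel crawling when the process is interrupted.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
	defer stop()

	// Restrict the crawl to one domain and two levels of depth,
	// and let requests run concurrently.
	c := colly.NewCollector(
		colly.AllowedDomains("example.com"),
		colly.MaxDepth(2),
		colly.Async(true),
	)

	// Cap the number of concurrent requests per domain.
	if err := c.Limit(&colly.LimitRule{DomainGlob: "*", Parallelism: 4}); err != nil {
		panic(err)
	}

	// Follow links, but stop queueing new ones once shutdown begins.
	c.OnHTML("a[href]", func(e *colly.HTMLElement) {
		select {
		case <-ctx.Done():
			return
		default:
		}
		_ = e.Request.Visit(e.Attr("href"))
	})

	c.OnScraped(func(r *colly.Response) {
		fmt.Println("scraped:", r.Request.URL)
	})

	if err := c.Visit("https://example.com"); err != nil {
		panic(err)
	}
	c.Wait() // wait for the async workers to finish
}
```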
## Building from Source
### Prerequisites
- Go 1.21 or later
- Make (for using Makefile commands)
### Installation
1. Clone the repository:

   ```bash
   git clone https://github.com/yourusername/mcp-go-colly.git
   cd mcp-go-colly
   ```

2. Install dependencies:

   ```bash
   make deps
   ```
### Building
The project includes a Makefile with several useful commands:
```bash
# Build the binary (outputs to bin/mcp-go-colly)
make build

# Build for all platforms (Linux, Windows, macOS)
make build-all

# Run tests
make test

# Clean build artifacts
make clean

# Format code
make fmt

# Run linter
make lint
```
All binaries are generated in the `bin/` directory.
### Claude Desktop Configuration

To use the server with Claude Desktop, add the following to your `claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "web-scraper": {
      "command": "<add path here>/mcp-go-colly/bin/mcp-go-colly"
    }
  }
}
```
## Usage
### As an MCP Tool
The crawler is implemented as an MCP tool that can be called with the following parameters:
```json
{
  "urls": ["https://example.com"], // Single URL or array of URLs
  "max_depth": 2                   // Optional: Maximum crawl depth (default: 2)
}
```
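On the wire, an MCP client sends these parameters in a standard JSON-RPC `tools/call` request. The tool name `crawl` below is an assumption for illustration; consult the server's tool listing for the name it actually registers:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "crawl",
    "arguments": {
      "urls": ["https://example.com"],
      "max_depth": 2
    }
  }
}
```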
### Example MCP Tool Call
```go
// Build the request; mcp.CallToolRequest's Params fields are set
// directly rather than via an anonymous struct literal (which would
// not type-check). Assumes ctx and crawlerTool are already in scope.
req := mcp.CallToolRequest{}
req.Params.Arguments = map[string]interface{}{
	"urls":      []string{"https://example.com"},
	"max_depth": 2,
}

result, err := crawlerTool.Call(ctx, req)
```
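For orientation, the sketch below shows how a tool with this schema is typically registered using the mark3labs mcp-go library (acknowledged below). The tool name, description, and handler body are assumptions for illustration, not this project's actual code:

```go
package main

import (
	"context"

	"github.com/mark3labs/mcp-go/mcp"
	"github.com/mark3labs/mcp-go/server"
)

func main() {
	s := server.NewMCPServer("mcp-go-colly", "1.0.0")

	// Declare the tool and its input schema (names are illustrative).
	tool := mcp.NewTool("crawl",
		mcp.WithDescription("Crawl one or more URLs and return their content"),
		mcp.WithString("urls", mcp.Required(),
			mcp.Description("Single URL or array of URLs to crawl")),
		mcp.WithNumber("max_depth",
			mcp.Description("Maximum crawl depth (default: 2)")),
	)

	s.AddTool(tool, func(ctx context.Context, req mcp.CallToolRequest) (*mcp.CallToolResult, error) {
		// A real handler would run the Colly crawl here and format
		// the fetched pages into the result.
		return mcp.NewToolResultText("crawl results would go here"), nil
	})

	// Serve the tool over stdio, as Claude Desktop expects.
	if err := server.ServeStdio(s); err != nil {
		panic(err)
	}
}
```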
### Configuration Options
- `max_depth`: Set maximum crawl depth (default: 2)
- `urls`: Single URL string or array of URLs to crawl
- Domain restrictions are automatically applied based on the provided URLs (see the sketch below)
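The README does not spell out how the domain restriction is derived; one plausible approach, sketched here with a hypothetical `allowedDomains` helper, is to parse each input URL and collect its hostname for Colly's `AllowedDomains` option:

```go
package main

import (
	"fmt"
	"net/url"
)

// allowedDomains is a hypothetical helper that derives the set of
// allowed domains from the URLs passed to the tool.
func allowedDomains(rawURLs []string) []string {
	seen := make(map[string]bool)
	var domains []string
	for _, raw := range rawURLs {
		u, err := url.Parse(raw)
		if err != nil || u.Hostname() == "" {
			continue // skip URLs that cannot be parsed
		}
		if !seen[u.Hostname()] {
			seen[u.Hostname()] = true
			domains = append(domains, u.Hostname())
		}
	}
	return domains
}

func main() {
	fmt.Println(allowedDomains([]string{
		"https://example.com/docs",
		"https://example.com/blog",
		"https://example.org",
	}))
	// Output: [example.com example.org]
}
```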
## Contributing
1. Fork the repository
2. Create your feature branch
3. Commit your changes
4. Push to the branch
5. Create a Pull Request
## License
MIT
## Acknowledgments
- Colly Web Scraping Framework
- Mark3 Labs MCP Project