# MCP Go Colly Crawler
A web crawling framework that integrates the Model Context Protocol (MCP) with the Colly web scraping library.
## Overview
MCP Go Colly is a web crawling framework that integrates the Model Context Protocol (MCP) with the powerful Colly web scraping library, providing a flexible and extensible way to extract web content for large language model (LLM) applications.
## Features
- Concurrent web crawling with configurable depth and domain restrictions (see the sketch after this list)
- MCP server integration for tool-based crawling
- Graceful shutdown handling
- Robust error handling and result formatting
- Support for both single URL and batch URL crawling
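
For a sense of how these features map onto Colly itself, here is a minimal sketch of a collector configured with a depth limit, a domain restriction derived from the start URL, and bounded concurrency. The start URL, depth, and parallelism below are illustrative values, not this project's defaults.

```go
package main

import (
	"fmt"
	"net/url"

	"github.com/gocolly/colly/v2"
)

func main() {
	target := "https://example.com" // placeholder start URL

	// Derive a domain restriction from the start URL, mirroring the
	// automatic domain restriction described in the features above.
	u, err := url.Parse(target)
	if err != nil {
		panic(err)
	}

	c := colly.NewCollector(
		colly.MaxDepth(2),            // configurable crawl depth
		colly.AllowedDomains(u.Host), // stay on the starting domain
		colly.Async(true),            // enable concurrent requests
	)

	// Bound concurrency so the crawl stays polite.
	c.Limit(&colly.LimitRule{DomainGlob: "*", Parallelism: 4})

	// Follow links found on each page, up to MaxDepth.
	c.OnHTML("a[href]", func(e *colly.HTMLElement) {
		e.Request.Visit(e.Attr("href"))
	})

	c.OnResponse(func(r *colly.Response) {
		fmt.Println("visited:", r.Request.URL)
	})

	if err := c.Visit(target); err != nil {
		panic(err)
	}
	c.Wait() // wait for all in-flight async requests to finish
}
```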
## Building from Source
### Prerequisites
- Go 1.21 or later
- Make (for using Makefile commands)
### Installation
- Clone the repository:

  ```bash
  git clone https://github.com/yourusername/mcp-go-colly.git
  cd mcp-go-colly
  ```

- Install dependencies:

  ```bash
  make deps
  ```
### Building
The project includes a Makefile with several useful commands:
```bash
# Build the binary (outputs to bin/mcp-go-colly)
make build

# Build for all platforms (Linux, Windows, macOS)
make build-all

# Run tests
make test

# Clean build artifacts
make clean

# Format code
make fmt

# Run linter
make lint
```
All binaries are generated in the `bin/` directory.
### Claude Desktop Configuration

To use the crawler with Claude Desktop, add the following to your `claude_desktop_config.json` file:
```json
{
  "mcpServers": {
    "web-scraper": {
      "command": "<add path here>/mcp-go-colly/bin/mcp-go-colly"
    }
  }
}
```
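After saving the configuration, restart Claude Desktop so the new server is picked up.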
## Usage
### As an MCP Tool
The crawler is implemented as an MCP tool that can be called with the following parameters:
```json
{
  "urls": ["https://example.com"],  // Single URL or array of URLs
  "max_depth": 2                    // Optional: maximum crawl depth (default: 2)
}
```
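For context, the sketch below shows roughly how a tool with this parameter schema could be declared and served with the mark3labs mcp-go library (see Acknowledgments). The tool name, descriptions, and handler body here are illustrative assumptions, not this project's exact code.

```go
package main

import (
	"context"
	"fmt"

	"github.com/mark3labs/mcp-go/mcp"
	"github.com/mark3labs/mcp-go/server"
)

func main() {
	s := server.NewMCPServer("mcp-go-colly", "1.0.0")

	// Declare the crawl tool's schema; names and descriptions
	// are illustrative placeholders.
	tool := mcp.NewTool("crawl",
		mcp.WithDescription("Crawl one or more URLs and return page content"),
		mcp.WithString("urls", mcp.Required(),
			mcp.Description("Single URL or array of URLs to crawl")),
		mcp.WithNumber("max_depth",
			mcp.Description("Maximum crawl depth (default: 2)")),
	)

	s.AddTool(tool, func(ctx context.Context, req mcp.CallToolRequest) (*mcp.CallToolResult, error) {
		// A real handler would run the Colly crawl here and
		// format the collected pages into the result.
		return mcp.NewToolResultText("crawl complete"), nil
	})

	// Serve over stdio, which is how Claude Desktop launches the binary.
	if err := server.ServeStdio(s); err != nil {
		fmt.Println("server error:", err)
	}
}
```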
### Example MCP Tool Call
```go
result, err := crawlerTool.Call(ctx, mcp.CallToolRequest{
	Params: struct{ Arguments map[string]interface{} }{
		Arguments: map[string]interface{}{
			"urls":      []string{"https://example.com"},
			"max_depth": 2,
		},
	},
})
```
## Configuration Options
- `max_depth`: Maximum crawl depth (default: 2)
- `urls`: Single URL string or array of URLs to crawl (see the normalization sketch below)
- Domain restrictions are automatically applied based on the provided URLs
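
Because `urls` may arrive from the MCP client as either a single string or an array, the tool handler has to normalize it before crawling. The following is a minimal sketch of that normalization; `parseArgs` is a hypothetical helper, not a function from this codebase.

```go
package main

import "fmt"

// parseArgs normalizes the tool's raw MCP arguments: "urls" may be
// a single string or an array of strings, and "max_depth" falls back
// to the documented default of 2 when omitted.
func parseArgs(args map[string]interface{}) (urls []string, maxDepth int, err error) {
	switch v := args["urls"].(type) {
	case string:
		urls = []string{v}
	case []interface{}:
		for _, item := range v {
			s, ok := item.(string)
			if !ok {
				return nil, 0, fmt.Errorf("urls entries must be strings, got %T", item)
			}
			urls = append(urls, s)
		}
	default:
		return nil, 0, fmt.Errorf("urls must be a string or an array of strings")
	}

	maxDepth = 2 // default crawl depth
	if d, ok := args["max_depth"].(float64); ok { // JSON numbers decode as float64
		maxDepth = int(d)
	}
	return urls, maxDepth, nil
}

func main() {
	urls, depth, err := parseArgs(map[string]interface{}{
		"urls":      "https://example.com",
		"max_depth": float64(3),
	})
	fmt.Println(urls, depth, err)
}
```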
## Contributing
- Fork the repository
- Create your feature branch
- Commit your changes
- Push to the branch
- Create a Pull Request
## License
MIT
## Acknowledgments
- Colly Web Scraping Framework
- Mark3 Labs MCP Project