MCP Go Colly Crawler
A web crawling framework that integrates the Model Context Protocol (MCP) with the Colly web scraping library.
Overview
MCP Go Colly is a web crawling framework that integrates the Model Context Protocol (MCP) with the Colly web scraping library. It provides a flexible and extensible way to extract web content for large language model (LLM) applications.
Features
- Concurrent web crawling with configurable depth and domain restrictions
- MCP server integration for tool-based crawling
- Graceful shutdown handling
- Robust error handling and result formatting
- Support for both single URL and batch URL crawling
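These features map onto standard Colly collector options combined with signal-based shutdown. The sketch below is a rough, hypothetical illustration of that combination, not this project's actual code; the domain, depth, and parallelism values are placeholders.

package main

import (
    "context"
    "fmt"
    "os/signal"
    "syscall"

    "github.com/gocolly/colly/v2"
)

func main() {
    // Cancel crawling on SIGINT/SIGTERM for a graceful shutdown.
    ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
    defer stop()

    // Collector restricted to one domain and a maximum depth, running asynchronously.
    c := colly.NewCollector(
        colly.MaxDepth(2),
        colly.AllowedDomains("example.com"),
        colly.Async(true),
    )
    // Bound per-domain concurrency.
    c.Limit(&colly.LimitRule{DomainGlob: "*", Parallelism: 4})

    c.OnHTML("a[href]", func(e *colly.HTMLElement) {
        // Stop following links once shutdown has been requested.
        if ctx.Err() != nil {
            return
        }
        e.Request.Visit(e.Attr("href"))
    })
    c.OnScraped(func(r *colly.Response) {
        fmt.Println("scraped:", r.Request.URL)
    })

    c.Visit("https://example.com")
    c.Wait() // Wait for all asynchronous requests to finish.
}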
Building from Source
Prerequisites
- Go 1.21 or later
- Make (for using Makefile commands)
Installation
- Clone the repository:
git clone https://github.com/yourusername/mcp-go-colly.git
cd mcp-go-colly
- Install dependencies:
make deps
Building
The project includes a Makefile with several useful commands:
# Build the binary (outputs to bin/mcp-go-colly)
make build
# Build for all platforms (Linux, Windows, macOS)
make build-all
# Run tests
make test
# Clean build artifacts
make clean
# Format code
make fmt
# Run linter
make lint
All binaries will be generated in the bin/ directory.
To use the crawler with Claude Desktop, add the following configuration to your claude_desktop_config.json file:
{
  "mcpServers": {
    "web-scraper": {
      "command": "<add path here>/mcp-go-colly/bin/mcp-go-colly"
    }
  }
}
Usage
As an MCP Tool
The crawler is implemented as an MCP tool that can be called with the following parameters:
{
  "urls": ["https://example.com"],  // Single URL or array of URLs
  "max_depth": 2                    // Optional: Maximum crawl depth (default: 2)
}
Example MCP Tool Call
result, err := crawlerTool.Call(ctx, mcp.CallToolRequest{
    Params: struct{ Arguments map[string]interface{} }{
        Arguments: map[string]interface{}{
            "urls":      []string{"https://example.com"},
            "max_depth": 2,
        },
    },
})
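The call above assumes the crawler tool has already been registered with an MCP server. The sketch below shows how such a registration typically looks with the mark3labs/mcp-go library; the tool name "crawl", its parameter options, and the placeholder handler body are assumptions and may differ from this project's implementation.

package main

import (
    "context"
    "fmt"

    "github.com/mark3labs/mcp-go/mcp"
    "github.com/mark3labs/mcp-go/server"
)

func main() {
    s := server.NewMCPServer("mcp-go-colly", "1.0.0")

    // Hypothetical tool definition; the real project may use different names and options.
    crawlTool := mcp.NewTool("crawl",
        mcp.WithDescription("Crawl one or more URLs and return their content"),
        mcp.WithString("urls", mcp.Description("Single URL or array of URLs to crawl")),
        mcp.WithNumber("max_depth", mcp.Description("Maximum crawl depth")),
    )

    s.AddTool(crawlTool, func(ctx context.Context, req mcp.CallToolRequest) (*mcp.CallToolResult, error) {
        // Run the Colly crawl here and format the results (omitted in this sketch).
        return mcp.NewToolResultText("crawl results go here"), nil
    })

    // Serve over stdio so Claude Desktop can launch the binary directly.
    if err := server.ServeStdio(s); err != nil {
        fmt.Println("server error:", err)
    }
}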
Configuration Options
- max_depth: Set maximum crawl depth (default: 2)
- urls: Single URL string or array of URLs to crawl
- Domain restrictions are automatically applied based on the provided URLs (see the sketch below)
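The automatic domain restriction can be pictured as deriving each requested URL's hostname and passing it to Colly's allow-list. The helper below, allowedDomainsFor, is hypothetical and only illustrates the idea:

package main

import (
    "fmt"
    "net/url"

    "github.com/gocolly/colly/v2"
)

// allowedDomainsFor extracts the hostnames of the requested URLs so the
// crawler never leaves those domains.
func allowedDomainsFor(urls []string) []string {
    var domains []string
    for _, raw := range urls {
        if u, err := url.Parse(raw); err == nil && u.Hostname() != "" {
            domains = append(domains, u.Hostname())
        }
    }
    return domains
}

func main() {
    urls := []string{"https://example.com/docs", "https://example.org"}
    c := colly.NewCollector(
        colly.MaxDepth(2),
        colly.AllowedDomains(allowedDomainsFor(urls)...),
    )
    fmt.Println(c.AllowedDomains) // [example.com example.org]
    for _, u := range urls {
        c.Visit(u)
    }
}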
Contributing
- Fork the repository
- Create your feature branch
- Commit your changes
- Push to the branch
- Create a Pull Request
License
MIT
Acknowledgments
- Colly Web Scraping Framework
- Mark3 Labs MCP Project