Docs Fetch MCP Server
Fetch web page content with recursive exploration.
A Model Context Protocol (MCP) server for fetching web content with recursive exploration capabilities. This server enables LLMs to autonomously explore web pages and documentation to learn about specific topics.
Overview
The Docs Fetch MCP Server provides a simple but powerful way for LLMs to retrieve and explore web content. It enables:
- Fetching clean, readable content from any web page
- Recursive exploration of linked pages up to a specified depth
- Same-domain link traversal to gather comprehensive information
- Smart filtering of navigation links to focus on content-rich pages
This tool is particularly useful when users want an LLM to learn about a specific topic by exploring documentation or web content.
Features
- Content Extraction: Cleanly extracts the main content from web pages, removing distractions like navigation, ads, and irrelevant elements
- Link Analysis: Identifies and extracts links from the page, assessing their relevance
- Recursive Exploration: Follows links to related content within the same domain, up to a specified depth
- Parallel Processing: Efficiently crawls content with concurrent requests and proper error handling
- Robust Error Handling: Gracefully handles network issues, timeouts, and malformed pages
- Dual-Strategy Approach: Uses fast axios requests first with puppeteer as a fallback for more complex pages
- Timeout Prevention: Implements global timeout handling to ensure reliable operation within MCP time limits
- Partial Results: Returns available content even when some pages fail to load completely
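The same-domain traversal and link filtering described above can be sketched roughly as follows. This is an illustrative sketch only; the interface and function names (`Link`, `filterSameDomain`) are assumptions, not the repo's actual API. It resolves relative links against the root URL and keeps only HTTP(S) links on the same host:

```typescript
// Illustrative sketch of same-domain link filtering; names are
// assumed, not taken from the docs-fetch-mcp source.
interface Link {
  url: string;
  text: string;
}

// Keep only links that share the root page's hostname, dropping
// non-HTTP schemes (mailto:, javascript:) and malformed URLs.
function filterSameDomain(rootUrl: string, links: Link[]): Link[] {
  const rootHost = new URL(rootUrl).hostname;
  return links.filter((link) => {
    try {
      const parsed = new URL(link.url, rootUrl); // resolves relative URLs
      if (parsed.protocol !== "http:" && parsed.protocol !== "https:") {
        return false;
      }
      return parsed.hostname === rootHost;
    } catch {
      return false; // malformed URL: skip rather than fail the crawl
    }
  });
}
```

Resolving relative URLs against the root before comparing hostnames matters: documentation pages typically link with paths like `/docs/topic1`, which would otherwise be discarded.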
Usage
The server exposes a single MCP tool:
fetch_doc_content
Fetches web page content with the ability to explore linked pages up to a specified depth.
Parameters:
- url (string, required): URL of the web page to fetch
- depth (number, optional, default: 1): Maximum depth of directory/link exploration (1-5)
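For example, a tool call might carry arguments like the following (the exact request envelope depends on your MCP client; only the argument names above are defined by the server):

```
{
  "name": "fetch_doc_content",
  "arguments": {
    "url": "https://example.com/docs",
    "depth": 2
  }
}
```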
Returns:
{
  "rootUrl": "https://example.com/docs",
  "explorationDepth": 2,
  "pagesExplored": 5,
  "content": [
    {
      "url": "https://example.com/docs",
      "title": "Documentation",
      "content": "Main page content...",
      "links": [
        {
          "url": "https://example.com/docs/topic1",
          "text": "Topic 1"
        },
        ...
      ]
    },
    ...
  ]
}
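Conceptually, the exploration is a breadth-first crawl bounded by `depth`. The sketch below shows one way to structure it; the `Page` shape mirrors the return value above, but `explore` and `Fetcher` are assumed names for illustration, not the repo's actual internals. Note how a failed fetch is swallowed rather than aborting the crawl, matching the partial-results behavior described under Features:

```typescript
// Illustrative depth-limited crawl; names are assumptions,
// not the docs-fetch-mcp implementation.
interface Page {
  url: string;
  title: string;
  content: string;
  links: { url: string; text: string }[];
}

type Fetcher = (url: string) => Promise<Page>;

// Breadth-first exploration: the root is depth 1, its links depth 2,
// and so on up to maxDepth. Visited URLs are skipped to avoid cycles.
async function explore(
  rootUrl: string,
  maxDepth: number,
  fetchPage: Fetcher
): Promise<Page[]> {
  const visited = new Set<string>();
  const results: Page[] = [];
  let frontier = [rootUrl];

  for (let depth = 1; depth <= maxDepth && frontier.length > 0; depth++) {
    const next: string[] = [];
    for (const url of frontier) {
      if (visited.has(url)) continue;
      visited.add(url);
      try {
        const page = await fetchPage(url);
        results.push(page);
        next.push(...page.links.map((l) => l.url));
      } catch {
        // Partial results: one failed page does not fail the crawl.
      }
    }
    frontier = next;
  }
  return results;
}
```

Injecting the fetch function keeps the traversal logic independent of whether a page was retrieved via axios or via the puppeteer fallback.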
Installation
- Clone this repository:
git clone https://github.com/wolfyy970/docs-fetch-mcp.git
cd docs-fetch-mcp
- Install dependencies:
npm install
- Build the project:
npm run build
- Configure the server in your Claude client's MCP settings:
{
  "mcpServers": {
    "docs-fetch": {
      "command": "node",
      "args": [
        "/path/to/docs-fetch-mcp/build/index.js"
      ],
      "env": {
        "MCP_TRANSPORT": "pipe"
      }
    }
  }
}
Dependencies
- @modelcontextprotocol/sdk: MCP server SDK
- puppeteer: Headless browser for web page interaction
- axios: HTTP client for making requests
Development
To run the server in development mode:
npm run dev
License
MIT