Web Scout MCP Server
An MCP server for web search using DuckDuckGo and content extraction, with support for multiple URLs and memory optimizations.
✨ Features
- 🔍 DuckDuckGo Search: Fast and privacy-focused web search capability
- 📄 Content Extraction: Clean, readable text extraction from web pages
- 🚀 Parallel Processing: Support for extracting content from multiple URLs simultaneously
- 💾 Memory Optimization: Smart memory management to prevent application crashes
- ⏱️ Rate Limiting: Intelligent request throttling to avoid API blocks
- 🛡️ Error Handling: Robust error handling for reliable operation
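The parallel, failure-isolated extraction described above can be pictured as a simple fan-out. A minimal sketch, assuming each URL is fetched concurrently and a failure on one URL does not abort the others (extractAll and fetchText are illustrative names, not the server's actual API):

```javascript
// Illustrative sketch of parallel multi-URL extraction (hypothetical names,
// not the server's real implementation). URLs are fetched concurrently and
// each failure is captured per URL instead of rejecting the whole batch.
async function extractAll(urls, fetchText) {
  const settled = await Promise.allSettled(urls.map((u) => fetchText(u)));
  return settled.map((result, i) =>
    result.status === "fulfilled"
      ? { url: urls[i], text: result.value }
      : { url: urls[i], error: String(result.reason) }
  );
}
```

Promise.allSettled (rather than Promise.all) is what gives the per-URL error isolation: a rejected fetch becomes an `error` entry in the result list instead of a thrown exception.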
📦 Installation
Installing via Smithery
To install Web Scout for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @pinkpixel-dev/web-scout-mcp --client claude
Global Installation
npm install -g @pinkpixel/web-scout-mcp
Local Installation
npm install @pinkpixel/web-scout-mcp
🚀 Usage
Command Line
After installing globally, run:
web-scout-mcp
With MCP Clients
Add this to your MCP client's config.json (Claude Desktop, Cursor, etc.):
{
  "mcpServers": {
    "web-scout": {
      "command": "npx",
      "args": [
        "-y",
        "@pinkpixel/web-scout-mcp@latest"
      ]
    }
  }
}
Environment Variables
By default, running the published entrypoint (for example node dist/index.js or npx @pinkpixel/web-scout-mcp) automatically bootstraps the stdio transport. Set the WEB_SCOUT_DISABLE_AUTOSTART=1 environment variable when embedding the package and calling createServer() yourself, so the entrypoint does not connect its own transport.
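The autostart gate described above boils down to a single environment check. A minimal sketch of the pattern (shouldAutostart is an illustrative helper, not part of the package's public API):

```javascript
// Sketch of the autostart gate (illustrative only): the entrypoint
// connects the stdio transport unless the embedding host opts out
// by setting WEB_SCOUT_DISABLE_AUTOSTART=1 before loading the package.
function shouldAutostart(env) {
  return env.WEB_SCOUT_DISABLE_AUTOSTART !== "1";
}

// An embedding host sets the variable first, then wires the server
// returned by createServer() to its own transport.
```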
🧰 Tools
The server provides the following MCP tools:
🔍 DuckDuckGoWebSearch
Performs a web search using the DuckDuckGo search engine and returns a formatted list of results.
Input:
- query (string): The search query string
- maxResults (number, optional): Maximum number of results to return (default: 10)
Example:
{
  "query": "latest advancements in AI",
  "maxResults": 5
}
Output: A formatted list of search results with titles, URLs, and snippets.
📄 UrlContentExtractor
Fetches and extracts clean, readable content from web pages by removing unnecessary elements like scripts, styles, and navigation.
Input:
url: Either a single URL string or an array of URL strings
Example (single URL):
{
  "url": "https://example.com/article"
}
Example (multiple URLs):
{
  "url": [
    "https://example.com/article1",
    "https://example.com/article2"
  ]
}
Output: Extracted text content from the specified URL(s).
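The cleanup described above (removing scripts, styles, and navigation, then keeping readable text) can be approximated with a few substitutions. A rough sketch of the idea only; the server's actual extraction pipeline is not documented here and most likely uses a real HTML parser rather than regexes:

```javascript
// Rough sketch of the extraction idea (illustrative, not the server's code):
// drop script/style/nav blocks, strip remaining tags, collapse whitespace.
function extractText(html) {
  return html
    .replace(/<(script|style|nav)\b[\s\S]*?<\/\1>/gi, " ") // remove noisy blocks
    .replace(/<[^>]+>/g, " ")                              // strip remaining tags
    .replace(/\s+/g, " ")                                  // collapse whitespace
    .trim();
}
```

For example, extractText("<p>Hello <b>world</b></p><script>track()</script>") returns "Hello world".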
🛠️ Development
# Clone the repository
git clone https://github.com/pinkpixel-dev/web-scout-mcp.git
cd web-scout-mcp
# Install dependencies
npm install
# Build
npm run build
# Run
npm start
📚 Documentation
For more detailed information about the project, check out these resources:
- OVERVIEW.md - Technical overview and architecture
- CONTRIBUTING.md - Guidelines for contributors
- CHANGELOG.md - Version history and changes
📋 Requirements
- Node.js >= 18.0.0
- npm or yarn
📄 License
This project is licensed under the Apache 2.0 License.
Made with ❤️ by Pink Pixel
✨ Dream it, Pixel it ✨