# Web Browser MCP Server
Provides advanced web browsing capabilities for AI applications.
Enable AI assistants to browse and extract content from the web through a simple MCP interface.
The Web Browser MCP Server gives AI models the ability to browse websites, extract content, and understand web pages through the Model Context Protocol (MCP). It supports targeted content extraction with CSS selectors and robust error handling.
Contribute • Report Bug
## Core Features
- Smart Content Extraction: Target exactly what you need with CSS selectors
- Lightning Fast: Built with async processing for optimal performance
- Rich Metadata: Capture titles, links, and structured content
- Robust & Reliable: Built-in error handling and timeout management
- Cross-Platform: Works everywhere Python runs
## Quick Start
### Installing via Smithery
To install the Web Browser MCP Server for Claude Desktop automatically via Smithery:
```bash
npx -y @smithery/cli install web-browser-mcp-server --client claude
```
### Installing Manually
Install using uv:
```bash
uv tool install web-browser-mcp-server
```
For development:
```bash
# Clone and set up development environment
git clone https://github.com/blazickjp/web-browser-mcp-server.git
cd web-browser-mcp-server

# Create and activate virtual environment
uv venv
source .venv/bin/activate

# Install with test dependencies
uv pip install -e ".[test]"
```
## MCP Integration
Add this configuration to your MCP client config file:
```json
{
  "mcpServers": {
    "web-browser-mcp-server": {
      "command": "uv",
      "args": [
        "tool",
        "run",
        "web-browser-mcp-server"
      ],
      "env": {
        "REQUEST_TIMEOUT": "30"
      }
    }
  }
}
```
For development (replace `path/to/cloned/web-browser-mcp-server` with the path to your local clone):
```json
{
  "mcpServers": {
    "web-browser-mcp-server": {
      "command": "uv",
      "args": [
        "--directory",
        "path/to/cloned/web-browser-mcp-server",
        "run",
        "web-browser-mcp-server"
      ],
      "env": {
        "REQUEST_TIMEOUT": "30"
      }
    }
  }
}
```
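For Claude Desktop, the client config file is typically `claude_desktop_config.json` (on macOS, usually under `~/Library/Application Support/Claude/`). If you prefer to script the edit, the snippet below is a small convenience sketch, not part of this project, that merges the standard entry above into that file; adjust the path for your platform.

```python
# Convenience sketch (not part of this project): add the standard server entry
# to an existing Claude Desktop config on macOS.
import json
from pathlib import Path

CONFIG_PATH = Path.home() / "Library/Application Support/Claude/claude_desktop_config.json"

entry = {
    "command": "uv",
    "args": ["tool", "run", "web-browser-mcp-server"],
    "env": {"REQUEST_TIMEOUT": "30"},
}

# Load the existing config (or start fresh), insert the entry, and write it back.
config = json.loads(CONFIG_PATH.read_text()) if CONFIG_PATH.exists() else {}
config.setdefault("mcpServers", {})["web-browser-mcp-server"] = entry
CONFIG_PATH.parent.mkdir(parents=True, exist_ok=True)
CONFIG_PATH.write_text(json.dumps(config, indent=2))
print(f"Updated {CONFIG_PATH}")
```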
## Available Tools
The server provides a powerful web browsing tool:
### browse_webpage
Browse and extract content from web pages with optional CSS selectors:
```python
# Basic webpage fetch
result = await call_tool("browse_webpage", {
    "url": "https://example.com"
})

# Target specific content with CSS selectors
result = await call_tool("browse_webpage", {
    "url": "https://example.com",
    "selectors": {
        "headlines": "h1, h2",
        "main_content": "article.content",
        "navigation": "nav a"
    }
})
```
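The `call_tool` calls above are client-side shorthand. As a rough sketch of the same request made through the official `mcp` Python SDK over stdio (import paths and result shapes can vary between SDK versions):

```python
# Sketch: calling browse_webpage via the official `mcp` Python SDK over stdio.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Launch the server the same way the MCP client config does.
    server = StdioServerParameters(
        command="uv",
        args=["tool", "run", "web-browser-mcp-server"],
    )
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "browse_webpage",
                {"url": "https://example.com", "selectors": {"headlines": "h1, h2"}},
            )
            # Tool results arrive as a list of content blocks; print the text parts.
            for block in result.content:
                if getattr(block, "text", None):
                    print(block.text)


asyncio.run(main())
```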
## Configuration
Configure through environment variables:
| Variable | Purpose | Default |
|---|---|---|
| `REQUEST_TIMEOUT` | Webpage request timeout in seconds | `30` |
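An environment-driven setting like this is read once at startup. The snippet below only illustrates that common pattern; it is not the server's actual source.

```python
# Illustration of the usual pattern for an env-driven setting (not this project's code).
import os

REQUEST_TIMEOUT = float(os.environ.get("REQUEST_TIMEOUT", "30"))
print(f"Request timeout: {REQUEST_TIMEOUT:g}s")
```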
## Testing
Run the test suite:
```bash
python -m pytest
```
## License
Released under the MIT License. See the LICENSE file for details.
Made with ❤️ by the Pear Labs Team