# web-search-mcp

A Model Context Protocol (MCP) server that provides web search functionality, using a headless Chrome browser to scrape Google, DuckDuckGo, and Bing search results. Requires Chrome to be installed.
## Tools

### search_web

Search the web using Google and return structured results.

Parameters:

- `query` (string): The search query
- `max_results` (int, optional): Maximum number of results to return (default: 10, max: 100)
- `include_snippets` (bool, optional): Whether to include text snippets (default: true)

Returns:

- `title`: Page title
- `url`: Full URL
- `domain`: Domain name
- `snippet`: Text snippet (if enabled)
- `rank`: Search result ranking

### get_webpage_content
Fetch and return the text content of a webpage.
Parameters:

- `url` (string): The URL of the webpage to fetch
- `max_length` (int, optional): Maximum content length (default: 5000, max: 20000)

Returns:

- `url`: The requested URL
- `title`: Page title
- `content`: Extracted text content
- `length`: Content length in characters

## Installation

Install dependencies:
```bash
# Using uv (recommended)
uv sync

# Or using pip
pip install -e .
```
Install the Chrome browser (required for Selenium):

```bash
# macOS
brew install --cask google-chrome

# Debian/Ubuntu
sudo apt-get install google-chrome-stable
```
ChromeDriver will be automatically downloaded and managed by webdriver-manager.
## Usage

```bash
# Run directly
python main.py

# Or use the installed script
web-search-mcp
```
The server will start and listen for MCP connections.
Add this configuration to your Claude Desktop MCP settings:

```json
{
  "mcpServers": {
    "web-search": {
      "command": "python",
      "args": ["/path/to/your/web-search-mcp/main.py"]
    }
  }
}
```
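A quick sanity check before restarting Claude Desktop is to round-trip the settings snippet through Python's `json` module; this is a hypothetical helper for catching syntax mistakes, not part of the server:

```python
import json

# Paste your settings snippet here to catch JSON syntax errors early.
snippet = """
{
  "mcpServers": {
    "web-search": {
      "command": "python",
      "args": ["/path/to/your/web-search-mcp/main.py"]
    }
  }
}
"""

config = json.loads(snippet)  # raises json.JSONDecodeError on a typo
server = config["mcpServers"]["web-search"]
print(server["command"], server["args"][0])
```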
Once connected, you can use the tools like this:

> Search for "python web scraping tutorials" and show me the top 5 results.

> Get the content from this webpage: https://example.com/article
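Based on the field list above, a single `search_web` result can be expected to take this shape (the field names come from the tool documentation; the values are made up):

```python
# Illustrative shape of one search_web result; values are invented.
result = {
    "title": "Python Web Scraping Tutorial",
    "url": "https://example.com/article",
    "domain": "example.com",
    "snippet": "A beginner-friendly introduction to scraping...",
    "rank": 1,
}

expected_fields = {"title", "url", "domain", "snippet", "rank"}
assert set(result) == expected_fields
```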
The web searcher runs Chrome in headless mode with a set of default options, and the tools include comprehensive error handling: errors are logged, and graceful fallbacks are provided.
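The "log and fall back" pattern can be sketched like this. It is a minimal illustration using a hypothetical `fetch_page_text` helper; the server's actual handlers live in `main.py`:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("web-search-mcp")

def fetch_page_text(url: str) -> dict:
    """Return page text, or a structured error instead of raising."""
    try:
        if not url.startswith(("http://", "https://")):
            raise ValueError(f"unsupported URL scheme: {url}")
        # ... real fetching/scraping would happen here ...
        return {"url": url, "content": "...", "error": None}
    except Exception as exc:
        # Log the failure, then return a fallback result the caller can use.
        logger.error("fetch failed for %s: %s", url, exc)
        return {"url": url, "content": "", "error": str(exc)}

result = fetch_page_text("ftp://example.com")  # graceful fallback, no crash
```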
## Dependencies

- `fastmcp`: MCP server framework
- `selenium`: Web browser automation
- `beautifulsoup4`: HTML parsing
- `webdriver-manager`: Chrome driver management
- `requests`: HTTP requests
- `lxml`: XML/HTML parser

## Development

To modify or extend the functionality:
```bash
uv sync          # or: pip install -e .
python main.py
```
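The text-extraction step behind `get_webpage_content` can be approximated with `beautifulsoup4` and `lxml` from the dependency list. This is a sketch of the general technique, not the server's exact code:

```python
from bs4 import BeautifulSoup

def extract_text(html: str, max_length: int = 5000) -> dict:
    """Extract title and visible text, truncated to max_length characters."""
    soup = BeautifulSoup(html, "lxml")
    # Drop script/style nodes so only readable text remains.
    for tag in soup(["script", "style"]):
        tag.decompose()
    title = soup.title.get_text(strip=True) if soup.title else ""
    content = " ".join(soup.get_text(separator=" ").split())[:max_length]
    return {"title": title, "content": content, "length": len(content)}

html = "<html><head><title>Demo</title></head><body><p>Hello world</p></body></html>"
print(extract_text(html))
```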
## License

This project is licensed under the MIT License. See the LICENSE file for details.

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.