# WebSearch - Advanced Web Search and Content Extraction Tool
A powerful web search and content extraction tool built with Python, leveraging the Firecrawl API for advanced web scraping, searching, and content analysis capabilities.
## Features
- Advanced Web Search: Perform intelligent web searches with customizable parameters
- Content Extraction: Extract specific information from web pages using natural language prompts
- Web Crawling: Crawl websites with configurable depth and limits
- Web Scraping: Scrape web pages with support for various output formats
- MCP Integration: Built as a Model Context Protocol (MCP) server for seamless integration (see the sketch below)
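As a rough illustration of the MCP integration, this is how a tool server is typically declared with the official `mcp` Python SDK (a minimal sketch under that assumption, not necessarily how this project's `main.py` is written):

```python
from mcp.server.fastmcp import FastMCP  # assumes the official `mcp` Python SDK

mcp = FastMCP("websearch")

@mcp.tool()
def search(query: str) -> str:
    """Perform a web search and return the results as JSON."""
    ...  # delegate to the Firecrawl API here

if __name__ == "__main__":
    mcp.run()  # serves the tools over stdio for MCP clients
```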
## Prerequisites
- Python 3.8 or higher
- uv package manager
- Firecrawl API key
- OpenAI API key (optional, for enhanced features)
- Tavily API key (optional, for additional search capabilities)
## Installation
- Install uv:
```bash
# On Windows (using pip)
pip install uv

# On Unix/MacOS
curl -LsSf https://astral.sh/uv/install.sh | sh

# Add uv to PATH (Unix/MacOS)
export PATH="$HOME/.local/bin:$PATH"

# Add uv to PATH (Windows - add to Environment Variables)
# Add: %USERPROFILE%\.local\bin
```
- Clone the repository:
```bash
git clone https://github.com/yourusername/websearch.git
cd websearch
```
- Create and activate a virtual environment with uv:
```bash
# Create virtual environment
uv venv

# Activate on Windows
.\.venv\Scripts\activate.ps1

# Activate on Unix/MacOS
source .venv/bin/activate
```
- Install dependencies with uv:
```bash
# Sync dependencies from pyproject.toml and uv.lock
uv sync
```
- Set up environment variables:
```bash
# Create .env file
touch .env

# Add your API keys
FIRECRAWL_API_KEY=your_firecrawl_api_key
OPENAI_API_KEY=your_openai_api_key
```
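At runtime the application can pick these up with python-dotenv; here is a minimal sketch of the loading step (assuming python-dotenv is among the dependencies):

```python
import os

from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # loads variables from .env into the process environment

FIRECRAWL_API_KEY = os.environ["FIRECRAWL_API_KEY"]  # required
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")  # optional
```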
## Usage
### Setting Up with Claude for Desktop
Instead of running the server directly, you can configure Claude for Desktop to access the WebSearch tools:
1. Locate or create your Claude for Desktop configuration file:
   - Windows: `%APPDATA%\Claude\claude_desktop_config.json`
   - macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`

2. Add the WebSearch server configuration to the `mcpServers` section:
```json
{
  "mcpServers": {
    "websearch": {
      "command": "uv",
      "args": [
        "--directory",
        "D:\\ABSOLUTE\\PATH\\TO\\WebSearch",
        "run",
        "main.py"
      ]
    }
  }
}
```
3. Replace the directory path with the absolute path to your WebSearch project folder.

4. Save the configuration file and restart Claude for Desktop.

Once configured, the WebSearch tools will appear in the tools menu (hammer icon) in Claude for Desktop.
### Available Tools
- Search
- Extract Information
- Crawl Websites
- Scrape Content
## API Reference
### Search

- `query` (str): The search query
- Returns: Search results in JSON format
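For illustration, here is roughly what the underlying call could look like with the `firecrawl-py` SDK. Treat it as a sketch: method names and signatures vary between SDK versions, and this is not necessarily how `main.py` implements the tool.

```python
from firecrawl import FirecrawlApp  # assumes the firecrawl-py SDK

app = FirecrawlApp(api_key="your_firecrawl_api_key")

# Perform a web search; the tool returns results like these as JSON
results = app.search("release notes for Python 3.12")
print(results)
```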
### Extract

- `urls` (List[str]): List of URLs to extract information from
- `prompt` (str): Instructions for extraction
- `enableWebSearch` (bool): Enable supplementary web search
- `showSources` (bool): Include source references
- Returns: Extracted information in specified format
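Continuing the sketch above, an extract call might look like this (the parameter names mirror the tool's documented parameters and are assumptions about the SDK):

```python
# Extract structured information from pages using a natural-language prompt
data = app.extract(
    urls=["https://example.com/pricing"],
    prompt="List every plan name and its monthly price",
)
```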
### Crawl

- `url` (str): Starting URL
- `maxDepth` (int): Maximum crawl depth
- `limit` (int): Maximum pages to crawl
- Returns: Crawled content in markdown/HTML format
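A hypothetical crawl call mapping the parameters above onto the SDK (some SDK versions name this `crawl` and take a params dict instead of keyword arguments):

```python
# Crawl outward from a starting URL, bounded by depth and page count
crawl = app.crawl_url(
    "https://example.com",
    limit=10,     # maximum pages to crawl
    max_depth=2,  # maximum crawl depth
)
```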
### Scrape

- `url` (str): Target URL
- Returns: Scraped content with optional screenshots
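And a scrape call; the `formats` values are an assumption based on the output formats Firecrawl advertises (markdown, HTML, screenshots):

```python
# Scrape a single page, requesting markdown plus a screenshot
page = app.scrape_url("https://example.com", formats=["markdown", "screenshot"])
```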
## Configuration
### Environment Variables
The tool requires certain API keys to function. We provide a .env.example file that you can use as a template:
- Copy the example file:

```bash
# On Unix/MacOS
cp .env.example .env

# On Windows
copy .env.example .env
```
- Edit the `.env` file with your API keys:
```bash
# OpenAI API key - Required for AI-powered features
OPENAI_API_KEY=your_openai_api_key_here

# Firecrawl API key - Required for web scraping and searching
FIRECRAWL_API_KEY=your_firecrawl_api_key_here
```
### Getting the API Keys
1. OpenAI API Key:
   - Visit OpenAI's platform
   - Sign up or log in
   - Navigate to the API keys section
   - Create a new secret key

2. Firecrawl API Key:
   - Visit Firecrawl's website
   - Create an account
   - Navigate to your dashboard
   - Generate a new API key
If everything is configured correctly, a search request should return a JSON response with results.
### Troubleshooting
If you encounter errors:
- Ensure all required API keys are set in your `.env` file
- Verify the API keys are valid and have not expired
- Check that the `.env` file is in the root directory of the project
- Make sure the environment variables are being loaded correctly
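A quick way to verify the last point is a small check script (a hypothetical helper, not part of the project):

```python
import os

from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # reads .env from the current working directory
for key in ("FIRECRAWL_API_KEY", "OPENAI_API_KEY"):
    print(f"{key}: {'set' if os.getenv(key) else 'MISSING'}")
```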
## Contributing
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments
- Firecrawl for their powerful web scraping API
- OpenAI for AI capabilities
- The MCP community for the protocol specification
## Contact
José Martín Rodriguez Mortaloni - @m4s1t425 - jmrodriguezm13@gmail.com
Made with ❤️ using Python and Firecrawl