Tavily
A comprehensive search API for real-time web search, data extraction, and crawling, requiring a Tavily API key.
Tavily MCP Server
A Tavily MCP server implementation that uses fastmcp and supports both SSE and stdio transports. To use this server, you need a Tavily account and a Tavily API key, which must be loaded into the TAVILY_API_KEY environment variable.
The Tavily MCP server provides:
- Four tools: search, extract, map, and crawl
- Real-time web search capabilities through the tavily-search tool
- Intelligent data extraction from web pages via the tavily-extract tool
- A web mapping tool that creates a structured map of a website
- A web crawler that systematically explores websites
Prerequisites
- git installed (to clone the repo).
- uv installed.
- docker installed (optional, if you plan to run the SSE server inside a Docker container).
To install uv on Linux or macOS, run this in your terminal:
curl -LsSf https://astral.sh/uv/install.sh | sh
Environment Variables
Copy the .env.example file to .env, then set your TAVILY_API_KEY inside it:
TAVILY_API_KEY=<YOUR-API-KEY>
Optional: you can also configure the port if you plan to use SSE.
TAVILY_MCP_PORT=<PORT>
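The server reads the key from the environment at startup. A minimal Python sketch of the equivalent fail-fast check (the value set below is a placeholder, not a real key; `uv run --env-file .env` is what loads your actual .env into the environment):

```python
import os

# Stand-in for what `uv run --env-file .env` does: put the key into the
# environment. This value is a placeholder, not a real key.
os.environ.setdefault("TAVILY_API_KEY", "tvly-placeholder")

api_key = os.environ.get("TAVILY_API_KEY")
if not api_key:
    raise RuntimeError("TAVILY_API_KEY is not set; see the .env setup above")
print("TAVILY_API_KEY loaded")
```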
Running the SSE server
From inside the repo, run:
uv run --env-file .env tavily-mcp-sse
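Once the SSE server is running, MCP clients that accept an SSE endpoint can point at it directly. A hypothetical client entry, assuming the server's default port 8000 and fastmcp's default /sse path (adjust the port if you set TAVILY_MCP_PORT):

```json
{
  "mcpServers": {
    "tavily-mcp-server": {
      "url": "http://127.0.0.1:8000/sse"
    }
  }
}
```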
Running on stdio
Add the following to your MCP client's configuration:
{
  "mcpServers": {
    "tavily-mcp-server": {
      "command": "uv",
      "args": [
        "run",
        "--directory",
        "<LOCATION-TO-THE-REPO>",
        "tavily-mcp-stdio"
      ],
      "env": {
        "TAVILY_API_KEY": "<YOUR-API-KEY>"
      }
    }
  }
}
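A malformed config file is a common source of "server not found" errors, so it's worth validating the JSON before pointing a client at it. A quick sketch (the placeholders <LOCATION-TO-THE-REPO> and <YOUR-API-KEY> would be filled in for real use):

```python
import json

# The client config from above, embedded as a string for checking.
config_text = """
{
  "mcpServers": {
    "tavily-mcp-server": {
      "command": "uv",
      "args": ["run", "--directory", "<LOCATION-TO-THE-REPO>", "tavily-mcp-stdio"],
      "env": {"TAVILY_API_KEY": "<YOUR-API-KEY>"}
    }
  }
}
"""
config = json.loads(config_text)  # raises on malformed JSON
server = config["mcpServers"]["tavily-mcp-server"]
print(server["command"], server["args"][-1])  # → uv tavily-mcp-stdio
```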
Docker SSE Server
First, build the image from the Dockerfile in this repository:
docker build -t tavily-mcp .
Then run the container using the environment variables from your .env file:
docker run --name tavily-mcp \
-p 127.0.0.1:8000:8000 \
--env-file .env \
tavily-mcp
Or specify the environment variables yourself:
docker run --name tavily-mcp \
-p 127.0.0.1:8000:8000 \
-e TAVILY_API_KEY=<YOUR-API-KEY> \
tavily-mcp
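For repeatable runs, the same container can be described as a Docker Compose service. A sketch assuming the .env file and the tavily-mcp image name from above:

```yaml
services:
  tavily-mcp:
    image: tavily-mcp
    container_name: tavily-mcp
    env_file: .env
    ports:
      - "127.0.0.1:8000:8000"
```

Start it with `docker compose up`.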
Similar servers
Google Search Engine
A server for Google search and webpage content extraction, built on Cloudflare Workers with OAuth support.
BrowseAI Dev
Evidence-backed web research for AI agents: BM25 + NLI claim verification, confidence scores, citations, and contradiction detection across 12 MCP tools. Works with Claude Desktop, Cursor, and Windsurf; offers a Python SDK (pip install browseaidev) plus LangChain, CrewAI, and LlamaIndex integrations (npx browseai-dev).
Perplexica Search
Perform conversational searches with the Perplexica AI-powered answer engine.
RAG Documentation MCP Server
Retrieve and process documentation using vector search to provide relevant context for AI assistants.
Google Search
Perform Google searches and view web content with advanced bot detection avoidance.
QuantConnect Docs
An MCP server for intelligent search and retrieval of QuantConnect PDF documentation.
Shodan
Query Shodan's database of internet-connected devices and vulnerabilities using the Shodan API.
Google Research
Perform advanced web research using Google Search, with intelligent content extraction and multi-source synthesis.
Dartpoint
Access public disclosure information for Korean companies (DART) using the dartpoint.ai API.
MCP Market Russia
Search 1000+ Russian construction companies and real estate agencies for AI agents