mcp-jinaai-search
Efficient web search optimised for LLM-friendly content using the Jina.ai Search API.
⚠️ Notice
This repository is no longer maintained.
The functionality of this tool is now available in mcp-omnisearch, which combines multiple MCP tools in one unified package.
Please use mcp-omnisearch instead.
A Model Context Protocol (MCP) server for integrating Jina.ai's Search API with LLMs. This server provides efficient and comprehensive web search capabilities, optimised for retrieving clean, LLM-friendly content from the web.
Features
- 🔍 Advanced web search through Jina.ai Search API
- 🚀 Fast and efficient content retrieval
- 📄 Clean text extraction with preserved structure
- 🧠 Content optimised for LLMs
- 🌐 Support for various content types including documentation
- 🏗️ Built on the Model Context Protocol
- 🔄 Configurable caching for performance
- 🖼️ Optional image and link gathering
- 🌍 Localisation support through browser locale
- 🎯 Token budget control for response size
Configuration
This server requires configuration through your MCP client. Here are examples for different environments:
Cline Configuration
Add this to your Cline MCP settings:
```json
{
  "mcpServers": {
    "jinaai-search": {
      "command": "npx",
      "args": ["-y", "mcp-jinaai-search"],
      "env": {
        "JINAAI_API_KEY": "your-jinaai-api-key"
      }
    }
  }
}
```
Claude Desktop with WSL Configuration
For WSL environments, add this to your Claude Desktop configuration:
```json
{
  "mcpServers": {
    "jinaai-search": {
      "command": "wsl.exe",
      "args": [
        "bash",
        "-c",
        "JINAAI_API_KEY=your-jinaai-api-key npx mcp-jinaai-search"
      ]
    }
  }
}
```
Environment Variables
The server requires the following environment variable:
- JINAAI_API_KEY: Your Jina.ai API key (required)
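For local testing outside an MCP client, the key can be exported directly in the shell before launching the server (a sketch; the variable name comes from the table above, and the `npx` invocation assumes Node.js is installed):

```shell
# Export the required API key so the server can read it at startup.
export JINAAI_API_KEY="your-jinaai-api-key"

# Then launch the server over stdio (shown for reference):
# npx -y mcp-jinaai-search
```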
API
The server implements a single MCP tool with configurable parameters:
search
Search the web and get clean, LLM-friendly content using the Jina.ai Reader. Returns the top 5 results with their URLs and clean content.
Parameters:
- query (string, required): Search query
- format (string, optional): Response format ("json" or "text"). Defaults to "text"
- no_cache (boolean, optional): Bypass cache for fresh results. Defaults to false
- token_budget (number, optional): Maximum number of tokens for this request
- browser_locale (string, optional): Browser locale for rendering content
- stream (boolean, optional): Enable stream mode for large pages. Defaults to false
- gather_links (boolean, optional): Gather all links at the end of the response. Defaults to false
- gather_images (boolean, optional): Gather all images at the end of the response. Defaults to false
- image_caption (boolean, optional): Caption images in the content. Defaults to false
- enable_iframe (boolean, optional): Extract content from iframes. Defaults to false
- enable_shadow_dom (boolean, optional): Extract content from shadow DOM. Defaults to false
- resolve_redirects (boolean, optional): Follow redirect chains to the final URL. Defaults to true
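These tool parameters roughly correspond to request headers on Jina's search endpoint (`https://s.jina.ai/`). The sketch below shows one plausible mapping; the header names (`X-No-Cache`, `X-Token-Budget`, `X-Locale`, `X-With-Links-Summary`, `X-With-Images-Summary`) follow Jina's public Reader documentation but should be treated as assumptions, not the server's verified implementation:

```typescript
// Sketch: map a subset of the search tool's parameters onto a
// Jina Search API request. Header names are assumptions based on
// Jina's public docs; verify against docs.jina.ai before relying on them.

interface SearchParams {
  query: string;
  format?: "json" | "text";
  no_cache?: boolean;
  token_budget?: number;
  browser_locale?: string;
  gather_links?: boolean;
  gather_images?: boolean;
}

function buildSearchRequest(apiKey: string, p: SearchParams) {
  const headers: Record<string, string> = {
    Authorization: `Bearer ${apiKey}`,
  };
  if (p.format === "json") headers["Accept"] = "application/json";
  if (p.no_cache) headers["X-No-Cache"] = "true";
  if (p.token_budget) headers["X-Token-Budget"] = String(p.token_budget);
  if (p.browser_locale) headers["X-Locale"] = p.browser_locale;
  if (p.gather_links) headers["X-With-Links-Summary"] = "true";
  if (p.gather_images) headers["X-With-Images-Summary"] = "true";
  // The query is URL-encoded into the path of the search endpoint.
  const url = `https://s.jina.ai/${encodeURIComponent(p.query)}`;
  return { url, headers };
}

// Example: a cache-bypassing JSON search with a token cap.
const req = buildSearchRequest("your-jinaai-api-key", {
  query: "model context protocol",
  format: "json",
  no_cache: true,
  token_budget: 4000,
});
console.log(req.url);
```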
Development
Setup
- Clone the repository
- Install dependencies: `pnpm install`
- Build the project: `pnpm run build`
- Run in development mode: `pnpm run dev`
Publishing
- Create a changeset: `pnpm changeset`
- Version the package: `pnpm version`
- Build and publish: `pnpm release`
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
MIT License - see the LICENSE file for details.
Acknowledgments
- Built on the Model Context Protocol
- Powered by Jina.ai Search API