Driflyte MCP Server
MCP Server for Driflyte.
The Driflyte MCP Server exposes tools that allow AI assistants to query and retrieve topic-specific knowledge from recursively crawled and indexed web pages. With this MCP server, Driflyte acts as a bridge between diverse, topic-aware content sources (web, GitHub, and more) and AI-powered reasoning, enabling richer, more accurate answers.
What It Does
- Deep Web Crawling: Recursively follows links to crawl and index web pages.
- GitHub Integration: Crawls repositories, issues, and discussions.
- Extensible Resource Support: Future support planned for Slack, Microsoft Teams, Google Docs/Drive, Confluence, JIRA, Zendesk, Salesforce, and more.
- Topic-Aware Indexing: Each document is tagged with one or more topics, enabling targeted, topic-specific retrieval.
- Designed for RAG with RAG: The server itself is built with Retrieval-Augmented Generation (RAG) in mind, and it powers RAG workflows by providing assistants with high-quality, topic-specific documents as grounding context.
- Designed for AI with AI: The system is not just for AI assistants — it is also designed and evolved using AI itself, making it an AI-native component for intelligent knowledge retrieval.
Usage & Limits
- Free Access: Driflyte is currently free to use.
- No Signup Required: You can start using it immediately — no registration or subscription needed.
- Rate Limits: To ensure fair usage, requests are limited by IP:
  - 100 API requests per 5 minutes per IP address.
- Future changes to usage policies and limits may be introduced as new features and resource integrations become available.
Prerequisites
- Node.js 18+
- An AI assistant with an MCP client, such as Cursor, Claude (Desktop or Code), VS Code, Windsurf, etc.
Configurations
CLI Arguments
Driflyte MCP server supports the following CLI arguments for configuration:
- --transport <stdio|streamable-http>: Configures the transport protocol (defaults to stdio).
- --port <number>: Configures the port number to listen on when using the streamable-http transport (defaults to 3000).
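For example, the following command starts the server over the Streamable HTTP transport (the port value 8080 here is just an illustrative choice):

npx -y @driflyte/mcp-server --transport streamable-http --port 8080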
Quick Start
This MCP server (using STDIO or Streamable HTTP transport) can be added to any MCP client,
such as VS Code, Claude, Cursor, Windsurf, or GitHub Copilot, via the @driflyte/mcp-server NPM package.
ChatGPT
- Navigate to Settings under your profile and enable Developer Mode under the Connectors option.
- In the chat panel, click the + icon, and from the dropdown, select Developer Mode. You'll see an option to add sources/connectors.
- Enter the following MCP server details and then click Create:
  - Name: Driflyte
  - MCP Server URL: https://mcp.driflyte.com/openai
  - Authentication: No authentication
  - Trust Setting: Check "I trust this application"
See "How to set up a remote MCP server and connect it to ChatGPT deep research" and "MCP server tools now in ChatGPT – developer mode" for more info.
Claude Code
Run the following command. See Claude Code MCP docs for more info.
Local Server
claude mcp add driflyte -- npx -y @driflyte/mcp-server
Remote Server
claude mcp add --transport http driflyte https://mcp.driflyte.com/mcp
Claude Desktop
Local Server
Add the following configuration into the claude_desktop_config.json file.
See the Claude Desktop MCP docs for more info.
{
"mcpServers": {
"driflyte": {
"command": "npx",
"args": ["-y", "@driflyte/mcp-server"]
}
}
}
Remote Server
Go to Settings > Connectors > Add Custom Connector in Claude Desktop and add the new MCP server with the following fields:
- Name: Driflyte
- Remote MCP server URL: https://mcp.driflyte.com/mcp
Copilot Coding Agent
Add the following configuration to the mcpServers section of your Copilot Coding Agent configuration through
Repository > Settings > Copilot > Coding agent > MCP configuration.
See the Copilot Coding Agent MCP docs for more info.
Local Server
{
"mcpServers": {
"driflyte": {
"type": "local",
"command": "npx",
"args": ["-y", "@driflyte/mcp-server"]
}
}
}
Remote Server
{
"mcpServers": {
"driflyte": {
"type": "http",
"url": "https://mcp.driflyte.com/mcp"
}
}
}
Cursor
Add the following configuration into the ~/.cursor/mcp.json file (or .cursor/mcp.json in your project folder).
Or set it up via the 🖱️ One Click Installation.
See the Cursor MCP docs for more info.
Local Server
{
"mcpServers": {
"driflyte": {
"command": "npx",
"args": ["-y", "@driflyte/mcp-server"]
}
}
}
Remote Server
{
"mcpServers": {
"driflyte": {
"url": "https://mcp.driflyte.com/mcp"
}
}
}
Gemini CLI
Add the following configuration into the ~/.gemini/settings.json file.
See the Gemini CLI MCP docs for more info.
Local Server
{
"mcpServers": {
"driflyte": {
"command": "npx",
"args": ["-y", "@driflyte/mcp-server"]
}
}
}
Remote Server
{
"mcpServers": {
"driflyte": {
"httpUrl": "https://mcp.driflyte.com/mcp"
}
}
}
Smithery
Run the following command. You can find your Smithery API key here. See the Smithery CLI docs for more info.
npx -y @smithery/cli install @serkan-ozal/driflyte-mcp-server --client <SMITHERY-CLIENT-NAME> --key <SMITHERY-API-KEY>
VS Code
Add the following configuration into the .vscode/mcp.json file.
Or set it up via the 🖱️ One Click Installation.
See the VS Code MCP docs for more info.
Local Server
{
"mcp": {
"servers": {
"driflyte": {
"type": "stdio",
"command": "npx",
"args": ["-y", "@driflyte/mcp-server"]
}
}
}
}
Remote Server
{
"mcp": {
"servers": {
"driflyte": {
"type": "http",
"url": "https://mcp.driflyte.com/mcp"
}
}
}
}
Windsurf
Add the following configuration into the ~/.codeium/windsurf/mcp_config.json file.
See the Windsurf MCP docs for more info.
Local Server
{
"mcpServers": {
"driflyte": {
"command": "npx",
"args": ["-y", "@driflyte/mcp-server"]
}
}
}
Remote Server
{
"mcpServers": {
"driflyte": {
"serverUrl": "https://mcp.driflyte.com/mcp"
}
}
}
Components
Tools
- list-topics: Returns a list of topics for which resources (web pages, etc.) have been crawled and content is available. This allows AI assistants to discover the most relevant and up-to-date subject areas currently indexed by the crawler.
  - Input Schema: No input parameters.
  - Output Schema:
    - topics (Optional: false, Type: Array<string>): List of the supported topics.
- search: Given a list of topics and a user question, this tool retrieves the top-K most relevant documents from the crawled content. It is designed to help AI assistants surface the most contextually appropriate and up-to-date information for a specific topic and query. This enables more informed and accurate responses based on real-world, topic-tagged web content.
  - Input Schema:
    - topics (Optional: false, Type: Array<string>): A list of one or more topic identifiers to constrain the search space. Only documents tagged with at least one of these topics will be considered.
    - query (Optional: false, Type: string): The natural language query or question for which relevant information is being sought. This will be used to rank documents by semantic relevance.
    - topK (Optional: true, Type: number, Default: 10, Min: 1, Max: 30): The maximum number of relevant documents to return. Results are sorted by descending relevance score.
  - Output Schema:
    - documents (Optional: false, Type: Array<Document>): Matched documents for the search query. Each Document has:
      - content (Optional: false, Type: string): Related content (full or partial) of the matched document.
      - metadata (Optional: false, Type: Map<string, any>): Metadata of the document and related content in key-value format.
      - score (Optional: false, Type: number, Min: 0, Max: 1): Similarity score (between 0 and 1) for the content of the document.
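For illustration, an MCP client invokes search through a standard MCP tools/call request. The sketch below uses a hypothetical topic name, query, and topK value, since the available topics depend on what Driflyte has indexed and can be discovered via list-topics:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search",
    "arguments": {
      "topics": ["mcp"],
      "query": "How does the streamable-http transport work?",
      "topK": 5
    }
  }
}

list-topics can be called the same way with an empty arguments object, since it takes no input parameters.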
Resources
N/A
Roadmap
- Support more content types (.pdf, .ppt/.pptx, .doc/.docx, and many others, including audio and video file formats)
- Integrate with more data sources (Slack, Teams, Google Docs/Drive, Confluence, JIRA, Zendesk, Salesforce, etc.)
- Add more topics with their resources
Issues and Feedback
Please use GitHub Issues for bug reports, feature requests, and support.
Contribution
If you would like to contribute, please:
- Fork the repository on GitHub and clone your fork.
- Create a branch for your changes and make your changes on it.
- Send a pull request, clearly explaining your contribution.
Tip: Please check the existing pull requests for similar contributions, and consider submitting an issue to discuss the proposed feature before writing code.
License
Licensed under MIT.