RAGify Docs API & MCP Server
A Developer's Tool for Interactive Documentation
Scrape entire documentation recursively and ask AI-powered questions using Retrieval-Augmented Generation (RAG)
Overview
RAGify Docs is a comprehensive tool that helps developers quickly navigate and understand documentation by combining web scraping, vector embeddings, and AI-powered question answering. Instead of manually reading through documentation, simply provide a URL and ask questions; RAGify will find the most relevant answers, backed by actual documentation content.
Key Features
- Recursive Web Scraping - Automatically traverse and extract content from entire documentation websites
- Vector Embeddings - Convert documentation into semantic embeddings using HuggingFace models
- Smart Retrieval - Use Max Marginal Relevance (MMR) to fetch diverse and relevant context
- AI-Powered Answers - Leverage Groq's fast language models for accurate responses
- Intelligent Caching - Reuse embeddings across multiple queries on the same documentation
- Multiple Interfaces - Access via REST API, MCP Server, or direct Python module
- Source Attribution - Get links to the exact documentation pages used to answer your questions
- Production-Ready - Built with FastAPI and async support for scalable deployments
Project Structure
RAGify-Docs-API/
├── main.py           # Core RAG engine - documentation scraping & question answering
├── app.py            # FastAPI REST API server
├── mcp_server.py     # MCP (Model Context Protocol) server for Claude/AI integrations
├── pyproject.toml    # Project metadata and dependencies
├── requirements.txt  # Python package requirements
└── README.md         # This file
Component Architecture
+-------------------------------------------------------+
|                    RAGify Docs API                    |
+-------------------------------------------------------+
|                                                       |
|  +------------+   +------------+   +------------+     |
|  |  FastAPI   |   | MCP Server |   |   Python   |     |
|  |  (/ragify) |   | (ask_docs) |   |   Module   |     |
|  +-----+------+   +-----+------+   +-----+------+     |
|        |                |                |            |
|        +----------------+----------------+            |
|                         |                             |
|                   +-----+------+                      |
|                   |  main.py   |                      |
|                   | (RAG Core) |                      |
|                   +-----+------+                      |
|                         |                             |
|        +----------------+----------------+            |
|        |                |                |            |
|  +-----+------+   +-----+------+   +-----+------+     |
|  |  Scraper   |   | Embeddings |   |    LLM     |     |
|  |   (URL)    |   |    (HF)    |   |   (Groq)   |     |
|  +-----+------+   +-----+------+   +-----+------+     |
|        |                |                |            |
|        +----------------+----------------+            |
|                         |                             |
|                 +-------+-------+                     |
|                 | Cache Storage |                     |
|                 |    (In-Mem)   |                     |
|                 +---------------+                     |
|                                                       |
+-------------------------------------------------------+
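The flow in the diagram can be sketched end to end in plain Python. The functions below are simplified stand-ins (a fixed-window splitter for the text splitter, a toy term-frequency "embedding" for the HuggingFace model, and a prompt string instead of a Groq call); they only illustrate how the pieces connect, not the actual main.py implementation:

```python
import math
import re
from collections import Counter

def chunk(text: str, size: int = 60, overlap: int = 15) -> list[str]:
    """Fixed-size character windows (stand-in for the text splitter)."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def embed(text: str) -> Counter:
    """Toy term-frequency 'embedding' (stand-in for the HuggingFace model)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by similarity to the query and keep the top k."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

docs = ("LangChain is a framework for building LLM applications. "
        "A retriever returns relevant documents for a query.")
chunks = chunk(docs)
context = retrieve("What is a retriever?", chunks)
joined = "\n".join(context)
# In the real pipeline this prompt would be sent to the Groq LLM.
prompt = f"Answer using only this context:\n{joined}\n\nQuestion: What is a retriever?"
```

The real system swaps each stand-in for its production counterpart (recursive scraper, sentence-transformer embeddings, vector store with MMR, Groq chat model), but the data flow is the same.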
Installation
Prerequisites
- Python 3.12+
- pip or uv package manager
- An API key for Groq (optional; alternatively, use Ollama for local inference)
Setup Steps

1. Clone the repository

   git clone <repository-url>
   cd RAGify-Docs-API

2. Create a virtual environment

   python -m venv .venv
   .venv\Scripts\activate       # Windows
   source .venv/bin/activate    # macOS/Linux

3. Install dependencies

   pip install -r requirements.txt
   # or using uv
   uv sync

4. Create a .env file (optional - for API keys)

   GROQ_API_KEY=your_groq_api_key_here
Usage
Option 1: FastAPI REST API
Start the server:
uvicorn app:app --reload --host 0.0.0.0 --port 8000
Make a request:
curl -X POST "http://localhost:8000/ragify" \
-H "Content-Type: application/json" \
-d '{
"url": "https://docs.langchain.com/oss/python/langchain/overview",
"query": "What is LangChain?"
}'
Python example:
import requests
response = requests.post(
"http://localhost:8000/ragify",
json={
"url": "https://docs.python.org/3/",
"query": "How do I create a list?"
}
)
print(response.json())
# {
# "answer": "...",
# "sources": ["https://docs.python.org/3/..."]
# }
API Documentation:
- Interactive docs (Swagger UI): http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
Option 2: MCP Server
Start the MCP server:
python mcp_server.py
Default configuration:
- Host: 0.0.0.0
- Port: 8000 (or from the PORT environment variable)
- Transport: HTTP Streamable
Option 3: Direct Python Module
Use RAGify in your own Python code:
from main import main
# Initialize RAG for a documentation URL
rag_chain = main("https://docs.langchain.com/oss/python/langchain/overview")
# Ask questions
response = rag_chain.invoke({
"input": "What is a retriever in LangChain?"
})
print(response["answer"])
print(response["context"]) # List of source documents
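The "intelligent caching" feature means the expensive scrape-and-embed step runs once per URL, and later queries reuse the stored chain. A minimal sketch of that pattern (function and variable names here are hypothetical, not the actual main.py implementation):

```python
_chain_cache: dict[str, object] = {}

def get_chain(url: str, build):
    """Return the cached chain for a URL, building it only on first use."""
    if url not in _chain_cache:
        _chain_cache[url] = build(url)  # the expensive scrape + embed step
    return _chain_cache[url]

# Usage with a stand-in builder that records how often it actually runs.
build_calls = []
def fake_build(url: str) -> str:
    build_calls.append(url)
    return f"chain-for-{url}"

first = get_chain("https://docs.example.com", fake_build)
second = get_chain("https://docs.example.com", fake_build)
```

Both calls return the same chain, and the builder runs only once; that is why the second and later questions against the same documentation URL are much faster than the first.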
Configuration
Environment Variables
# Groq API Configuration
GROQ_API_KEY=your_key_here
GROQ_MODEL=openai/gpt-oss-120b
# Or use Ollama instead of Groq (local inference)
# Uncomment in main.py: llm = ChatOllama(model="your-model")
# MCP Server Port
PORT=8000
Customization in main.py
Chunk size and overlap:
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000, # Increase for longer contexts
chunk_overlap=200 # Increase for better continuity
)
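To see what chunk_size and chunk_overlap actually control, here is a simplified fixed-window splitter in plain Python (the real RecursiveCharacterTextSplitter additionally prefers paragraph and sentence boundaries; this sketch only shows the size/overlap arithmetic):

```python
def split_text(text: str, chunk_size: int = 1000, chunk_overlap: int = 200) -> list[str]:
    """Fixed windows that advance by (chunk_size - chunk_overlap) characters,
    so each chunk repeats the tail of the previous one."""
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# 300 characters, 100-char chunks, 20-char overlap -> windows start at 0, 80, 160, 240
chunks = split_text("abcdefghij" * 30, chunk_size=100, chunk_overlap=20)
```

Adjacent chunks share their last/first 20 characters, which is what keeps a sentence that straddles a boundary retrievable from at least one chunk.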
Embedding model:
embeddings = HuggingFaceEmbeddings(
model_name="sentence-transformers/all-MiniLM-L6-v2"
# Or use: "all-mpnet-base-v2" (larger, more accurate)
)
Retrieval parameters:
retriever = vector_store.as_retriever(
search_type="mmr",
search_kwargs={
"k": 5, # Number of results to return
"fetch_k": 10, # Candidates to consider
"lambda_mult": 0.5 # Balances similarity vs diversity
}
)
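MMR re-ranks the fetch_k candidates so the k results are relevant to the query but not near-duplicates of each other: lambda_mult=1.0 is pure similarity, 0.0 is pure diversity. A bare-bones version of the greedy selection over precomputed similarity scores (illustrative only, not LangChain's implementation):

```python
def mmr(query_sim: list[float], pair_sim, k: int = 2, lambda_mult: float = 0.5) -> list[int]:
    """Greedily pick k candidate indices, trading query relevance against
    redundancy with the candidates already selected."""
    selected: list[int] = []
    candidates = list(range(len(query_sim)))
    while candidates and len(selected) < k:
        def score(i: int) -> float:
            redundancy = max((pair_sim(i, j) for j in selected), default=0.0)
            return lambda_mult * query_sim[i] - (1 - lambda_mult) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Three candidates: 0 and 1 are near-duplicates; 2 is distinct but less relevant.
sims = [0.9, 0.85, 0.6]
pair = lambda i, j: 0.95 if {i, j} == {0, 1} else 0.1
picked = mmr(sims, pair, k=2, lambda_mult=0.5)  # picks 0, then skips 1 for 2
```

With lambda_mult=0.5 the near-duplicate candidate 1 loses to the more distinct candidate 2, even though 1 is more similar to the query; with lambda_mult=1.0 plain similarity ranking would pick 0 and 1.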
LLM selection:
# Use Groq (fast, requires API key)
llm = ChatGroq(model="openai/gpt-oss-120b", temperature=0.2)
# OR use Ollama locally (no API key needed)
# llm = ChatOllama(model="llama2", temperature=0.2)
API Reference
FastAPI Endpoints
POST /ragify
Ask a question about documentation.
Request:
{
"url": "https://docs.example.com",
"query": "How do I get started?"
}
Response:
{
"answer": "To get started with Example...",
"sources": [
"https://docs.example.com/getting-started",
"https://docs.example.com/installation"
]
}
Status Codes:
- 200 - Success
- 500 - RAG initialization or invocation error
GET /
Health check and welcome message.
Response:
{
"message": "Welcome to the RAGify Docs API! Use the /ragify endpoint to ask questions about documentation."
}
MCP Tool: ask_docs
Accessible through MCP clients (Claude, etc.)
Parameters:
- url (string): Documentation URL to scrape
- query (string): Question to ask
Returns:
{
"answer": "...",
"sources": ["url1", "url2"]
}
Or on error:
{
"error": "Error message"
}
Deployment
Docker (Optional)
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
Build and run:
docker build -t ragify-docs-api .
docker run -p 8000:8000 -e GROQ_API_KEY=your_key ragify-docs-api
Built with ❤️ for developers who love great documentation

⭐ If you found this useful, please star the repository!