📚 RAGify Docs API & MCP Server

A Developer's Tool for Interactive Documentation

Recursively scrape entire documentation sites and ask AI-powered questions using Retrieval-Augmented Generation (RAG)

Python FastAPI LangChain Groq MCP


🎯 Overview

RAGify Docs is a comprehensive tool that helps developers quickly navigate and understand documentation by combining web scraping, vector embeddings, and AI-powered question answering. Instead of manually reading through documentation, simply provide a URL and ask questions—RAGify will find the most relevant answers backed by actual documentation content.
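The recursive-scraping idea can be pictured with a toy link extractor. The real scraper in main.py is not shown here (it presumably uses a LangChain document loader); this standard-library sketch just illustrates the core step of resolving links and staying on the documentation's own domain:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collects href targets from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def same_domain_links(base_url, html):
    """Resolve relative hrefs and keep only links on the same host,
    which is what keeps a recursive crawl inside one docs site."""
    parser = LinkExtractor()
    parser.feed(html)
    base_host = urlparse(base_url).netloc
    resolved = {urljoin(base_url, href) for href in parser.links}
    return sorted(u for u in resolved if urlparse(u).netloc == base_host)

page = '<a href="/guide">Guide</a> <a href="https://other.com/x">Off-site</a>'
print(same_domain_links("https://docs.example.com/start", page))
# ['https://docs.example.com/guide']
```

A real crawler would fetch each discovered URL, repeat this extraction, and track visited pages to avoid loops.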

✨ Key Features

  • 🕷️ Recursive Web Scraping - Automatically traverse and extract content from entire documentation websites
  • 🧠 Vector Embeddings - Convert documentation into semantic embeddings using HuggingFace models
  • 🎯 Smart Retrieval - Use Max Marginal Relevance (MMR) to fetch diverse and relevant context
  • 🤖 AI-Powered Answers - Leverage Groq's fast language models for accurate responses
  • 💾 Intelligent Caching - Reuse embeddings across multiple queries on the same documentation
  • 🔌 Multiple Interfaces - Access via REST API, MCP Server, or direct Python module
  • 📍 Source Attribution - Get links to the exact documentation pages used to answer your questions
  • 🚀 Production-Ready - Built with FastAPI and async support for scalable deployments

🏗️ Project Structure

RAGify-Docs-API/
├── main.py              # Core RAG engine - documentation scraping & question answering
├── app.py               # FastAPI REST API server
├── mcp_server.py        # MCP (Model Context Protocol) server for Claude/AI integrations
├── pyproject.toml       # Project metadata and dependencies
├── requirements.txt     # Python package requirements
└── README.md            # This file

Component Architecture

┌─────────────────────────────────────────────────────────┐
│                    RAGify Docs API                      │
├─────────────────────────────────────────────────────────┤
│                                                         │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐   │
│  │   FastAPI    │  │  MCP Server  │  │    Python    │   │
│  │  (/ragify)   │  │  (ask_docs)  │  │    Module    │   │
│  └──────┬───────┘  └──────┬───────┘  └──────┬───────┘   │
│         │                 │                 │           │
│         └─────────────────┼─────────────────┘           │
│                           │                             │
│                    ┌──────▼───────┐                     │
│                    │   main.py    │                     │
│                    │  (RAG Core)  │                     │
│                    └──────┬───────┘                     │
│                           │                             │
│         ┌─────────────────┼─────────────────┐           │
│         │                 │                 │           │
│    ┌────▼────┐      ┌─────▼──────┐     ┌────▼─────┐     │
│    │ Scraper │      │ Embeddings │     │   LLM    │     │
│    │  (URL)  │      │    (HF)    │     │  (Groq)  │     │
│    └────┬────┘      └─────┬──────┘     └────┬─────┘     │
│         │                 │                 │           │
│         └─────────────────┼─────────────────┘           │
│                           │                             │
│                  ┌────────▼────────┐                    │
│                  │  Cache Storage  │                    │
│                  │   (In-Memory)   │                    │
│                  └─────────────────┘                    │
│                                                         │
└─────────────────────────────────────────────────────────┘

🚀 Installation

Prerequisites

  • Python 3.12+
  • pip or uv package manager
  • A Groq API key (optional; Ollama can be used for local inference instead)

Setup Steps

  1. Clone the repository

    git clone <repository-url>
    cd RAGify-Docs-API
    
  2. Create a virtual environment

    python -m venv .venv
    .venv\Scripts\activate  # Windows
    # or
    source .venv/bin/activate  # macOS/Linux
    
  3. Install dependencies

    pip install -r requirements.txt
    # or using uv
    uv sync
    
  4. Create a .env file (Optional - for API keys)

    GROQ_API_KEY=your_groq_api_key_here
    

📖 Usage

Option 1: FastAPI REST API

Start the server:

uvicorn app:app --reload --host 0.0.0.0 --port 8000

Make a request:

curl -X POST "http://localhost:8000/ragify" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://docs.langchain.com/oss/python/langchain/overview",
    "query": "What is LangChain?"
  }'

Python example:

import requests

response = requests.post(
    "http://localhost:8000/ragify",
    json={
        "url": "https://docs.python.org/3/",
        "query": "How do I create a list?"
    }
)

print(response.json())
# {
#     "answer": "...",
#     "sources": ["https://docs.python.org/3/..."]
# }

API Documentation:

  • Interactive docs: http://localhost:8000/docs (Swagger UI)
  • ReDoc: http://localhost:8000/redoc

Option 2: MCP Server

Start the MCP server:

python mcp_server.py

Default configuration:

  • Host: 0.0.0.0
  • Port: 8000 (or from PORT env variable)
  • Transport: HTTP Streamable
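To connect an MCP client to the running server, a configuration along these lines is typical. The exact schema depends on the client, and both the server name and the `/mcp` path here are assumptions, not values confirmed by this repository:

```json
{
  "mcpServers": {
    "ragify-docs": {
      "url": "http://localhost:8000/mcp"
    }
  }
}
```

Check your client's documentation for where this file lives and whether it expects a `url` (HTTP transport) or a `command` (stdio transport).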

Option 3: Direct Python Module

Use RAGify in your own Python code:

from main import main

# Initialize RAG for a documentation URL
rag_chain = main("https://docs.langchain.com/oss/python/langchain/overview")

# Ask questions
response = rag_chain.invoke({
    "input": "What is a retriever in LangChain?"
})

print(response["answer"])
print(response["context"])  # List of source documents

🔑 Configuration

Environment Variables

# Groq API Configuration
GROQ_API_KEY=your_key_here
GROQ_MODEL=openai/gpt-oss-120b

# Or use Ollama instead of Groq (local inference)
# Uncomment in main.py: llm = ChatOllama(model="your-model")

# MCP Server Port
PORT=8000

Customization in main.py

Chunk size and overlap:

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,      # Increase for longer contexts
    chunk_overlap=200     # Increase for better continuity
)

Embedding model:

embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
    # Or use: "all-mpnet-base-v2" (larger, more accurate)
)

Retrieval parameters:

retriever = vector_store.as_retriever(
    search_type="mmr",
    search_kwargs={
        "k": 5,           # Number of results to return
        "fetch_k": 10,    # Candidates to consider
        "lambda_mult": 0.5 # Balances similarity vs diversity
    }
)

LLM selection:

# Use Groq (fast, requires API key)
llm = ChatGroq(model="openai/gpt-oss-120b", temperature=0.2)

# OR use Ollama locally (no API key needed)
# llm = ChatOllama(model="llama2", temperature=0.2)

📋 API Reference

FastAPI Endpoints

POST /ragify

Ask a question about documentation.

Request:

{
  "url": "https://docs.example.com",
  "query": "How do I get started?"
}

Response:

{
  "answer": "To get started with Example...",
  "sources": [
    "https://docs.example.com/getting-started",
    "https://docs.example.com/installation"
  ]
}

Status Codes:

  • 200 - Success
  • 500 - RAG initialization or invocation error

GET /

Health check and welcome message.

Response:

{
  "message": "Welcome to the RAGify Docs API! Use the /ragify endpoint to ask questions about documentation."
}

MCP Tool: ask_docs

Accessible through MCP clients (Claude Desktop and other MCP-compatible hosts).

Parameters:

  • url (string): Documentation URL to scrape
  • query (string): Question to ask

Returns:

{
  "answer": "...",
  "sources": ["url1", "url2"]
}

Or on error:

{
  "error": "Error message"
}
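The success/error envelope above can be sketched as a thin wrapper. The real tool body lives in mcp_server.py; the `rag` argument here is a hypothetical stand-in for the actual RAG chain, used only to show the contract:

```python
def ask_docs(url: str, query: str, rag=None):
    """Return {"answer", "sources"} on success, or {"error": ...}
    on any failure, matching the envelope documented above."""
    try:
        if rag is None:
            raise ValueError(f"no RAG chain available for {url}")
        result = rag({"input": query})
        sources = sorted({doc["source"] for doc in result["context"]})
        return {"answer": result["answer"], "sources": sources}
    except Exception as exc:
        return {"error": str(exc)}

# A fake chain standing in for the real retriever + LLM:
fake = lambda inputs: {
    "answer": "42",
    "context": [{"source": "https://docs.example.com/a"}],
}
print(ask_docs("https://docs.example.com", "?", rag=fake))
print(ask_docs("https://docs.example.com", "?"))  # error envelope
```

Returning the error as data instead of raising keeps the MCP response well-formed for clients that do not surface server-side exceptions.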

🚀 Deployment

Docker (Optional)

FROM python:3.12-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000

CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]

Build and run:

docker build -t ragify-docs-api .
docker run -p 8000:8000 -e GROQ_API_KEY=your_key ragify-docs-api
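The same container can also be run via Docker Compose; a minimal sketch using the Dockerfile above (the service name is illustrative, and the key is forwarded from your shell or an .env file):

```yaml
services:
  ragify:
    build: .
    ports:
      - "8000:8000"
    environment:
      - GROQ_API_KEY=${GROQ_API_KEY}
```

Then `docker compose up --build` starts the API on port 8000.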

Built with ❤️ for developers who love great documentation

⭐ If you found this useful, please star the repository!
