Fast, token-efficient web content extraction for AI agents that converts websites to clean Markdown. Features Mozilla Readability, smart caching, polite crawling with robots.txt support, and concurrent fetching with minimal dependencies.
Existing MCP web crawlers are slow and consume large quantities of tokens. This stalls development and yields incomplete results, since LLMs must parse entire web pages.
This MCP package fetches web pages locally, strips noise, and converts content to clean Markdown while preserving links. It is designed for Claude Code, IDEs, and LLM pipelines, crawling sites locally with a minimal token footprint and minimal dependencies.
Note: This package now uses @just-every/crawl for its core crawling and markdown conversion functionality.
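The real conversion is handled by Readability and Turndown inside @just-every/crawl. As a toy, dependency-free illustration of the "preserving links" idea only (the function `linksToMarkdown` is hypothetical and not part of the package):

```typescript
// Toy sketch only: the actual pipeline uses Readability + Turndown.
// This illustrates keeping hyperlinks intact when HTML is flattened
// into Markdown: <a href="...">text</a> becomes [text](...).
function linksToMarkdown(html: string): string {
  return html.replace(/<a\s+[^>]*href="([^"]*)"[^>]*>(.*?)<\/a>/gi, "[$2]($1)");
}
```

For example, `linksToMarkdown('See <a href="https://example.com">the docs</a>')` yields `See [the docs](https://example.com)`.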
Claude Code:

```bash
claude mcp add read-website-fast -s user -- npx -y @just-every/mcp-read-website-fast
```

VS Code:

```bash
code --add-mcp '{"name":"read-website-fast","command":"npx","args":["-y","@just-every/mcp-read-website-fast"]}'
```

Cursor (deeplink install): cursor://anysphere.cursor-deeplink/mcp/install?name=read-website-fast&config=eyJyZWFkLXdlYnNpdGUtZmFzdCI6eyJjb21tYW5kIjoibnB4IiwiYXJncyI6WyIteSIsIkBqdXN0LWV2ZXJ5L21jcC1yZWFkLXdlYnNpdGUtZmFzdCJdfX0=
JetBrains IDEs: Settings → Tools → AI Assistant → Model Context Protocol (MCP) → Add. Choose "As JSON" and paste:

```json
{"command":"npx","args":["-y","@just-every/mcp-read-website-fast"]}
```

Or, in the chat window, type /add and fill in the same JSON; both paths add the server in a single step.
```json
{
  "mcpServers": {
    "read-website-fast": {
      "command": "npx",
      "args": ["-y", "@just-every/mcp-read-website-fast"]
    }
  }
}
```
Drop this into your client’s mcp.json (e.g. .vscode/mcp.json, ~/.cursor/mcp.json, or .mcp.json for Claude).
read_website - Fetches a webpage and converts it to clean markdown
- url (required): The HTTP/HTTPS URL to fetch
- pages (optional): Maximum number of pages to crawl (default: 1, max: 100)

Resources:
- read-website-fast://status - Get cache statistics
- read-website-fast://clear-cache - Clear the cache directory

```bash
npm install
npm run build
```

```bash
npm run dev fetch https://example.com/article
npm run dev fetch https://example.com --pages 2 --concurrency 5
```
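Over MCP, a client invokes the read_website tool with a standard tools/call request. An illustrative JSON-RPC payload (argument values are examples, not fixed):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "read_website",
    "arguments": { "url": "https://example.com/article", "pages": 1 }
  }
}
```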
```bash
# Markdown only (default)
npm run dev fetch https://example.com

# JSON output with metadata
npm run dev fetch https://example.com --output json

# Both URL and markdown
npm run dev fetch https://example.com --output both
```
- -p, --pages <number> - Maximum number of pages to crawl (default: 1)
- -c, --concurrency <number> - Max concurrent requests (default: 3)
- --no-robots - Ignore robots.txt
- --all-origins - Allow cross-origin crawling
- -u, --user-agent <string> - Custom user agent
- --cache-dir <path> - Cache directory (default: .cache)
- -t, --timeout <ms> - Request timeout in milliseconds (default: 30000)
- -o, --output <format> - Output format: json, markdown, or both (default: markdown)

```bash
npm run dev clear-cache
```
The MCP server includes automatic restart capability by default for improved reliability:
For development/debugging without auto-restart:
```bash
# Run directly without restart wrapper
npm run serve:dev
```
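A minimal sketch of what such a restart wrapper can look like (the names here are hypothetical; the real logic lives in src/serve-restart.ts): respawn the server process when it exits, with a capped exponential backoff between restarts.

```typescript
import { spawn } from "node:child_process";

// Delay before restart attempt N: doubles each time, capped at maxMs.
// backoffMs(0) -> 1000, backoffMs(3) -> 8000, backoffMs(10) -> 30000.
function backoffMs(attempt: number, baseMs = 1000, maxMs = 30000): number {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// Spawn the server and schedule a respawn whenever it exits.
function superviseServer(command: string, args: string[], attempt = 0): void {
  const child = spawn(command, args, { stdio: "inherit" });
  child.on("exit", () => {
    setTimeout(() => superviseServer(command, args, attempt + 1), backoffMs(attempt));
  });
}
```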
```
mcp/
├── src/
│   ├── crawler/          # URL fetching, queue management, robots.txt
│   ├── parser/           # DOM parsing, Readability, Turndown conversion
│   ├── cache/            # Disk-based caching with SHA-256 keys
│   ├── utils/            # Logger, chunker utilities
│   ├── index.ts          # CLI entry point
│   ├── serve.ts          # MCP server entry point
│   └── serve-restart.ts  # Auto-restart wrapper
```
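The cache/ layer keys entries by SHA-256. A sketch of how such a key could be derived (`cacheKey` is a hypothetical helper, assuming the key is simply the hex digest of the URL):

```typescript
import { createHash } from "node:crypto";

// Hypothetical sketch: derive a disk-cache key from a URL as the
// hex SHA-256 digest, giving stable, filesystem-safe filenames.
function cacheKey(url: string): string {
  return createHash("sha256").update(url).digest("hex");
}
```

Hashing avoids filesystem issues with long URLs and special characters while keeping lookups deterministic.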
```bash
# Run in development mode
npm run dev fetch https://example.com

# Build for production
npm run build

# Run tests
npm test

# Type checking
npm run typecheck

# Linting
npm run lint
```
Contributions are welcome!
Troubleshooting: clear stale results with npm run dev clear-cache; for slow sites, raise the request timeout via the -t flag; if a site blocks the default client, set a custom user agent with the -u flag.

License: MIT