# CrawlAPI

Scrape any URL with JavaScript rendering and get back clean markdown — built for AI agents, LLM pipelines, and autonomous research workflows.

## crawlapi-js

Official JavaScript/TypeScript SDK for CrawlAPI, a web scraping API built for AI agents. Pass a URL, get back clean markdown your LLM can actually read.
## Install

```bash
npm install crawlapi-js
```
## Quick Start

```javascript
const CrawlAPI = require('crawlapi-js');

const client = new CrawlAPI({ apiKey: 'YOUR_RAPIDAPI_KEY' });

// Scrape a URL → clean markdown
const result = await client.scrape('https://example.com');
console.log(result.data.markdown);

// Scrape multiple URLs in parallel
const batch = await client.batch([
  'https://example.com',
  'https://example.org'
]);

// Search + scrape top results in one call
const search = await client.search('LangChain web scraping 2025', { num: 5 });
```
## Get an API Key

Available on RapidAPI. Free tier: 50 calls/day, no credit card required.
## API

### `client.scrape(url, options?)`

Scrape a single URL with full JavaScript rendering.

```javascript
const result = await client.scrape('https://example.com', {
  formats: ['markdown'], // 'markdown' | 'html' | 'text' | 'structured'
  waitFor: 1000,         // ms to wait after page load (JS-heavy pages)
  timeout: 30000         // max ms (default 30s, max 60s)
});

console.log(result.data.markdown);
console.log(result.data.metadata.title);
```
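Scrapes of JS-heavy pages can occasionally time out. The helper below is a sketch of retrying with exponential backoff; it is not part of the SDK and assumes `client.scrape` rejects its promise on failure, which this README does not confirm.

```javascript
// Hypothetical retry helper (not part of the SDK). Assumes `client.scrape`
// rejects on failure (timeouts, transient errors) — an assumption.
async function scrapeWithRetry(client, url, attempts = 3, baseDelayMs = 500) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await client.scrape(url);
    } catch (err) {
      lastErr = err;
      if (i === attempts - 1) break;
      // Back off 500 ms, 1000 ms, 2000 ms, ...
      await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  throw lastErr;
}
```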
### `client.batch(urls, options?)`

Scrape up to 10 URLs in parallel. Failed URLs return an `error` field rather than failing the whole request.

```javascript
const result = await client.batch([
  'https://example.com',
  'https://example.org',
  'https://example.net'
]);

result.data.forEach(item => {
  if (item.success) console.log(item.data.markdown);
  else console.error(item.url, item.error);
});
```
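Since each batch call caps out at 10 URLs, a longer list needs to be split into groups first. The `chunk` and `batchAll` helpers below are our own sketch, not part of the SDK:

```javascript
// Split a list into groups of at most `size` (batch accepts up to 10 URLs).
function chunk(items, size) {
  const groups = [];
  for (let i = 0; i < items.length; i += size) {
    groups.push(items.slice(i, i + size));
  }
  return groups;
}

// Hypothetical driver: one sequential batch call per group of 10.
async function batchAll(client, urls) {
  const results = [];
  for (const group of chunk(urls, 10)) {
    const res = await client.batch(group);
    results.push(...res.data);
  }
  return results;
}
```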
### `client.search(query, options?)`

Search the web and automatically scrape the top N results.

```javascript
const result = await client.search('best pizza in New York', {
  num: 5, // 1-10 results (default 5)
  formats: ['markdown']
});

result.data.forEach(item => {
  console.log(item.title, item.url);
  if (item.success) console.log(item.data.markdown);
});
```
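A common next step is folding the successful results into a single markdown context string for an LLM prompt. The helper below is our own sketch; the result shape (`title`, `url`, `success`, `data.markdown`) follows the example above, and the character budget is arbitrary:

```javascript
// Concatenate successful search-and-scrape results into one markdown
// context block, stopping at a rough character budget.
function buildContext(results, maxChars = 12000) {
  const parts = [];
  let used = 0;
  for (const item of results) {
    if (!item.success) continue;
    const block = `## ${item.title}\nSource: ${item.url}\n\n${item.data.markdown}`;
    if (used + block.length > maxChars) break;
    parts.push(block);
    used += block.length;
  }
  return parts.join('\n\n---\n\n');
}
```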
## Agent Framework Examples

### LangChain

```javascript
// `DynamicTool` wraps an async function as a LangChain tool
// (the abstract `Tool` class cannot be instantiated directly).
const { DynamicTool } = require('langchain/tools');
const CrawlAPI = require('crawlapi-js');

const client = new CrawlAPI({ apiKey: process.env.RAPIDAPI_KEY });

const webTool = new DynamicTool({
  name: 'WebScraper',
  description: 'Scrape any URL and return clean markdown content. Input should be a URL.',
  func: async (url) => {
    const result = await client.scrape(url);
    return result.data.markdown;
  }
});
```
### OpenAI Function Calling

```javascript
const CrawlAPI = require('crawlapi-js');
const client = new CrawlAPI({ apiKey: process.env.RAPIDAPI_KEY });

// Tool definition
const tools = [{
  type: 'function',
  function: {
    name: 'scrape_url',
    description: 'Scrape a webpage and return its content as clean markdown',
    parameters: {
      type: 'object',
      properties: {
        url: { type: 'string', description: 'The URL to scrape' }
      },
      required: ['url']
    }
  }
}];

// Handle tool call
async function handleToolCall(url) {
  const result = await client.scrape(url);
  return result.data.markdown;
}
```
### n8n / Flowise / Custom HTTP

All endpoints accept standard JSON POST requests:

```http
POST https://crawlapi.net/v1/scrape
X-RapidAPI-Key: YOUR_KEY
Content-Type: application/json

{ "url": "https://example.com", "formats": ["markdown"] }
```

Full OpenAPI spec: crawlapi.net/openapi.json
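From Node 18+ the same request can be made without the SDK using the built-in `fetch`. This sketch mirrors the HTTP request above; only the headers shown there are assumed.

```javascript
// Build the request options for the raw scrape endpoint, mirroring the
// HTTP example above. Separated out so the payload can be inspected
// without performing network I/O.
function buildScrapeRequest(url, apiKey) {
  return {
    method: 'POST',
    headers: {
      'X-RapidAPI-Key': apiKey,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ url, formats: ['markdown'] })
  };
}

// Usage (needs a real key and network access):
// const res = await fetch('https://crawlapi.net/v1/scrape',
//   buildScrapeRequest('https://example.com', process.env.RAPIDAPI_KEY));
// const json = await res.json();
```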
## TypeScript

Full TypeScript types included:

```typescript
import { CrawlAPI, ScrapeResponse } from 'crawlapi-js';

const client = new CrawlAPI({ apiKey: process.env.RAPIDAPI_KEY! });
const result: ScrapeResponse = await client.scrape('https://example.com');
```
## MCP Server (Claude Desktop, Cursor, Windsurf)

CrawlAPI ships with a built-in MCP server — plug it directly into any MCP-compatible AI client.

### Setup

1. Clone or download `mcp-server.js` from this repo.
2. Add it to your Claude Desktop config (`~/Library/Application Support/Claude/claude_desktop_config.json` on macOS):

```json
{
  "mcpServers": {
    "crawlapi": {
      "command": "node",
      "args": ["/path/to/crawlapi-js/mcp-server.js"],
      "env": {
        "CRAWLAPI_KEY": "your_rapidapi_key"
      }
    }
  }
}
```

3. Restart Claude Desktop. You'll see three new tools:

- `scrape_url` — scrape any page to markdown
- `batch_scrape` — scrape up to 10 URLs in parallel
- `search_and_scrape` — search + scrape top results in one shot
### Cursor / Windsurf / Continue

Same config, different location — check your editor's MCP docs. The `mcp-server.js` file works with any stdio-transport MCP client.
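For Cursor, for example, the server entry typically goes in `.cursor/mcp.json` at the project root (file location per Cursor's MCP docs; the entry itself mirrors the Claude Desktop config above):

```json
{
  "mcpServers": {
    "crawlapi": {
      "command": "node",
      "args": ["/path/to/crawlapi-js/mcp-server.js"],
      "env": { "CRAWLAPI_KEY": "your_rapidapi_key" }
    }
  }
}
```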
### What the agent sees

> "Use `scrape_url` to fetch web pages as clean markdown. Use `search_and_scrape` to research topics from the live web. Both support JavaScript-rendered pages."

Get your API key at RapidAPI — free tier included.
## AI Agent Discovery

CrawlAPI publishes all standard discovery files:

| File | URL |
|---|---|
| llms.txt | crawlapi.net/llms.txt |
| OpenAPI spec | crawlapi.net/openapi.json |
| AI Plugin manifest | crawlapi.net/.well-known/ai-plugin.json |
| MCP manifest | crawlapi.net/.well-known/mcp.json |
| APIs.json | crawlapi.net/apis.json |