UseScraper MCP Server
This is a TypeScript-based MCP server that provides web scraping capabilities using the UseScraper API. It exposes a single tool, 'scrape', which extracts content from web pages in various formats.
Features
Tools
scrape - Extract content from a webpage
Parameters:
- url (required): The URL of the webpage to scrape
- format (optional): The format to save the content (text, html, markdown). Default: markdown
- advanced_proxy (optional): Use an advanced proxy to circumvent bot detection. Default: false
- extract_object (optional): Object specifying data to extract
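As a sketch, a call using extract_object might look like the following. Note that the shape of the extraction object (field names mapped to natural-language descriptions) is an assumption; check the UseScraper API documentation for the exact schema it expects:

```json
{
  "name": "scrape",
  "arguments": {
    "url": "https://example.com/products",
    "format": "markdown",
    "extract_object": {
      "title": "The page title",
      "price": "The product price"
    }
  }
}
```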
Installation
Installing via Smithery
To install UseScraper for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install usescraper-server --client claude
Manual Installation
1. Clone the repository:

   git clone https://github.com/your-repo/usescraper-server.git
   cd usescraper-server

2. Install dependencies:

   npm install

3. Build the server:

   npm run build
Configuration
To use with Claude Desktop, add the server config:
On MacOS: ~/Library/Application Support/Claude/claude_desktop_config.json
On Windows: %APPDATA%/Claude/claude_desktop_config.json
{
"mcpServers": {
"usescraper-server": {
"command": "node",
"args": ["/path/to/usescraper-server/build/index.js"],
"env": {
"USESCRAPER_API_KEY": "your-api-key-here"
}
}
}
}
Replace /path/to/usescraper-server with the actual path to the server build, and your-api-key-here with your UseScraper API key.
Usage
Once configured, you can use the 'scrape' tool through the MCP interface. Example usage:
{
"name": "scrape",
"arguments": {
"url": "https://example.com",
"format": "markdown"
}
}
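Optional parameters can be combined in the same call. For instance, to retrieve plain text while routing the request through the advanced proxy:

```json
{
  "name": "scrape",
  "arguments": {
    "url": "https://example.com",
    "format": "text",
    "advanced_proxy": true
  }
}
```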
Development
For development with auto-rebuild:
npm run watch
Debugging
Since MCP servers communicate over stdio, debugging can be challenging. We recommend using the MCP Inspector, which is available as a package script:
npm run inspector
The Inspector will provide a URL to access debugging tools in your browser.