Scrape websites with Oxylabs Web API, supporting dynamic rendering and parsing for structured data extraction.
The Oxylabs MCP server provides a bridge between AI models and the web. It enables them to scrape any URL, render JavaScript-heavy pages, extract and format content for AI use, bypass anti-scraping measures, and access geo-restricted web data from 195+ countries.
This implementation leverages the Model Context Protocol (MCP) to create a secure, standardized way for AI assistants to interact with web content.
Imagine telling your LLM "Summarise the latest Hacker News discussion about GPT‑7" – and it simply answers.
MCP (Model Context Protocol) makes that happen by doing the boring parts for you:
| What Oxylabs MCP does | Why it matters to you |
|---|---|
| Bypasses anti‑bot walls with the Oxylabs global proxy network | Keeps you unblocked and anonymous |
| Renders JavaScript in headless Chrome | Single‑page apps, sorted |
| Cleans HTML → JSON | Drop straight into vector DBs or prompts |
| Optional structured parsers (Google, Amazon, etc.) | One‑line access to popular targets |
When you've set up the MCP server with Claude, you can make requests like:

- Scrape https://www.google.com/search?q=ai
- Scrape the page https://www.amazon.de/-/en/Smartphone-Contract-Function-Manufacturer-Exclusive/dp/B0CNKD651V
- Scrape https://www.amazon.de/-/en/gp/bestsellers/beauty/ref=zg_bs_nav_beauty_0 with parse enabled
- Scrape https://www.bestbuy.com/site/top-deals/all-electronics-on-sale/pcmcat1674241939957.c with parse and render enabled
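Behind a prompt like the Amazon examples above, the MCP client issues a JSON‑RPC `tools/call` request to the server. The sketch below shows roughly what that message looks like; the tool name `universal_scraper` and the `parse` argument are assumptions for illustration, not the server's confirmed interface.

```python
import json

# Sketch of the JSON-RPC message an MCP client would emit for
# "scrape ... with parse enabled". Tool name and argument names
# are hypothetical here; consult the server's tool listing.
call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "universal_scraper",  # assumed tool name
        "arguments": {
            "url": "https://www.amazon.de/-/en/gp/bestsellers/beauty/ref=zg_bs_nav_beauty_0",
            "parse": True,  # assumed flag for structured parsing
        },
    },
}
print(json.dumps(call, indent=2))
```

In practice Claude or Cursor builds this message for you; it is shown only to make the prompt‑to‑tool mapping concrete.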
Before you begin, make sure you have:

- An Oxylabs account (username and password)
- Via Smithery CLI: the `npx` command‑line tool
- Via uv: the `uv` package manager – install it using this guide

The Oxylabs MCP Universal Scraper accepts these parameters:
| Parameter | Description | Values |
|---|---|---|
| `url` | The URL to scrape | Any valid URL |
| `render` | Use headless browser rendering | `html` or `None` |
| `geo_location` | Sets the proxy's geo‑location for retrieving data | `Brazil`, `Canada`, etc. |
| `user_agent_type` | Device type and browser | `desktop`, `tablet`, etc. |
| `output_format` | The format of the output | `links`, `md`, `html` |
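Under the hood, these parameters end up in a request to the Oxylabs Web Scraper API. The sketch below shows one plausible mapping from the tool arguments to an API‑style payload; the exact way the MCP server forwards parameters internally is an assumption, though the field names follow the table above.

```python
# Sketch: mapping the scraper tool's parameters onto a request
# payload. The "source": "universal" field mirrors the Oxylabs
# Web Scraper API convention; the forwarding logic is assumed.
def build_payload(url, render=None, geo_location=None, user_agent_type=None):
    payload = {"source": "universal", "url": url}
    if render:
        payload["render"] = render              # e.g. "html"
    if geo_location:
        payload["geo_location"] = geo_location  # e.g. "Canada"
    if user_agent_type:
        payload["user_agent_type"] = user_agent_type  # e.g. "desktop"
    return payload

print(build_payload("https://www.google.com/search?q=ai", render="html"))
```

Only the parameters you pass are included, so a plain `url`-only call stays minimal.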
Via uvx:

```json
{
  "mcpServers": {
    "oxylabs_scraper_uvx": {
      "command": "uvx",
      "args": ["oxylabs-mcp"],
      "env": {
        "OXYLABS_USERNAME": "YOUR_USERNAME",
        "OXYLABS_PASSWORD": "YOUR_PASSWORD"
      }
    }
  }
}
```
Via the Smithery CLI (npx):

```json
{
  "mcpServers": {
    "oxylabs-mcp": {
      "command": "npx",
      "args": [
        "-y",
        "@smithery/cli@latest",
        "run",
        "@oxylabs/oxylabs-mcp",
        "--config",
        "\"{\\\"oxylabsUsername\\\":\\\"OXYLABS_USERNAME\\\",\\\"oxylabsPassword\\\":\\\"OXYLABS_PASSWORD\\\"}\""
      ]
    }
  }
}
```
Via uv (local checkout):

```json
{
  "mcpServers": {
    "oxylabs_scraper": {
      "command": "uv",
      "args": [
        "--directory",
        "/<Absolute-path-to-folder>/oxylabs-mcp",
        "run",
        "oxylabs-mcp"
      ],
      "env": {
        "OXYLABS_USERNAME": "YOUR_USERNAME",
        "OXYLABS_PASSWORD": "YOUR_PASSWORD"
      }
    }
  }
}
```
Navigate to Claude → Settings → Developer → Edit Config and add one of the configurations above to the `claude_desktop_config.json` file.
Navigate to Cursor → Settings → Cursor Settings → MCP. Click Add new global MCP server and add one of the configurations above.
This server provides two main tools:
Distributed under the MIT License – see LICENSE for details.
Established in 2015, Oxylabs is a market-leading web intelligence collection platform, driven by the highest business, ethics, and compliance standards, enabling companies worldwide to unlock data-driven insights.