Novada MCP Server

Search, extract, crawl, map, and research the web — from any AI agent or terminal.

Powered by novada.com — 100M+ proxy IPs across 195 countries.


Quick Start · Tools · Examples · Use Cases · Comparison · CLI · 中文


nova — Try It in 10 Seconds

npm install -g novada-mcp
export NOVADA_API_KEY=your-key    # Free at novada.com

Subcommands: `nova search` · `nova extract` · `nova crawl` · `nova map` · `nova research`

nova search "best desserts in Düsseldorf" --country de
nova extract https://example.com
nova map https://docs.example.com --search "api"
nova research "How do AI agents use web scraping?" --depth deep

Real Output Examples

nova search "best desserts in Düsseldorf" --country de

[Results: 4 | Engine: google | Country: de | Via: Novada proxy]

1. **THE BEST Dessert in Düsseldorf**
   URL: https://www.tripadvisor.com/Restaurants-g187373-zfg9909-Dusseldorf...
   Dessert in Düsseldorf:
   1. Heinemann Konditorei Confiserie (4.4★, 298 reviews)
   2. Eis-Café Pia (4.5★, 182 reviews)
   3. Cafe Huftgold (4.3★)

2. **Top 10 Best Desserts Near Dusseldorf**
   URL: https://www.yelp.com/search?cflt=desserts&find_loc=Dusseldorf...
   1. Namu Café  2. Pure Pastry  3. Tenten Coffee
   4. Eiscafé Pia  5. Pure ...

3. **Good Dessert Spots : r/duesseldorf**
   URL: https://www.reddit.com/r/duesseldorf/comments/1mxh4bj/...
   "I'm moving to Düsseldorf soon and I love trying out desserts!
    Do you guys know any good spots/cafes?"

Your agent can then extract any URL for full details, or research deeper:

nova extract https://www.tripadvisor.com/Restaurants-g187373-zfg9909-Dusseldorf...
nova research "best German pastries and cafes in Düsseldorf NRW" --depth deep
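The bracketed status line that `nova search` prints above its results can be parsed into structured fields before handing them to an agent. A minimal sketch; `parse_result_header` is an illustrative helper, not part of the CLI:

```python
import re

def parse_result_header(header: str) -> dict:
    """Parse the bracketed status line that `nova search` prints, e.g.
    "[Results: 4 | Engine: google | Country: de | Via: Novada proxy]".
    Returns the key/value pairs as a dict with lowercase keys."""
    match = re.match(r"\[(.+)\]", header.strip())
    if not match:
        raise ValueError(f"not a result header: {header!r}")
    fields = {}
    for part in match.group(1).split("|"):
        key, _, value = part.partition(":")
        fields[key.strip().lower()] = value.strip()
    return fields

info = parse_result_header(
    "[Results: 4 | Engine: google | Country: de | Via: Novada proxy]"
)
```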

nova research "How do AI agents use web scraping?" --depth deep

# Research Report: How do AI agents use web scraping?

**Depth:** deep | **Searches:** 6 | **Results found:** 23 | **Unique sources:** 15

## Key Findings
1. **How AI Agents Are Changing the Future of Web Scraping**
   https://medium.com/@davidfagb/...
   These agents can think, understand, and adjust...

2. **Scaling Web Scraping with Data Streaming, Agentic AI**
   https://www.confluent.io/blog/real-time-web-scraping/
   AI Agents iteratively create code, crawl, and scrape at scale...

## Sources
1. [How AI Agents Are Changing Web Scraping](https://medium.com/...)
2. [Scaling Web Scraping with Agentic AI](https://www.confluent.io/...)

Map → Extract Workflow

# Step 1: Discover pages
nova map https://docs.example.com --search "webhook"

# Step 2: Extract what you need
nova extract https://docs.example.com/webhooks/events
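The two-step workflow above is easy to script. The sketch below only builds the argv lists to pass to `subprocess.run`; `nova_argv` is a hypothetical helper, not part of novada-mcp:

```python
def nova_argv(command: str, target: str, **options) -> list[str]:
    """Build an argv list for a `nova` subcommand, turning keyword
    options into `--flag value` pairs (underscores become dashes)."""
    argv = ["nova", command, target]
    for flag, value in options.items():
        argv += [f"--{flag.replace('_', '-')}", str(value)]
    return argv

# Step 1: discover pages matching "webhook"
step1 = nova_argv("map", "https://docs.example.com", search="webhook")
# Step 2: extract a page found in step 1
step2 = nova_argv("extract", "https://docs.example.com/webhooks/events")
```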

Quick Start

Claude Code (1 command)

claude mcp add novada -e NOVADA_API_KEY=your-key -- npx -y novada-mcp

Add `--scope user` to enable it for all projects:

claude mcp add --scope user novada -e NOVADA_API_KEY=your-key -- npx -y novada-mcp

Cursor / VS Code / Windsurf / Claude Desktop

Cursor (`.cursor/mcp.json`):

{
  "mcpServers": {
    "novada": {
      "command": "npx",
      "args": ["-y", "novada-mcp@latest"],
      "env": { "NOVADA_API_KEY": "your-key" }
    }
  }
}

VS Code (`.vscode/mcp.json`):

{
  "servers": {
    "novada": {
      "command": "npx",
      "args": ["-y", "novada-mcp@latest"],
      "env": { "NOVADA_API_KEY": "your-key" }
    }
  }
}

Windsurf (`~/.codeium/windsurf/mcp_config.json`):

{
  "mcpServers": {
    "novada": {
      "command": "npx",
      "args": ["-y", "novada-mcp@latest"],
      "env": { "NOVADA_API_KEY": "your-key" }
    }
  }
}

Claude Desktop (`~/Library/Application Support/Claude/claude_desktop_config.json`):

{
  "mcpServers": {
    "novada": {
      "command": "npx",
      "args": ["-y", "novada-mcp@latest"],
      "env": { "NOVADA_API_KEY": "your-key" }
    }
  }
}

Python (via CLI)

import subprocess, os

result = subprocess.run(
    ["nova", "search", "AI agent frameworks"],
    capture_output=True, text=True,
    env={**os.environ, "NOVADA_API_KEY": "your-key"}
)
print(result.stdout)
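For repeated calls it helps to wrap `subprocess.run` with error handling. `run_cli` below is an illustrative wrapper, not part of the package; it raises with the CLI's stderr attached when a command exits non-zero:

```python
import subprocess

def run_cli(argv: list[str], timeout: float = 60.0) -> str:
    """Run a CLI command and return its stdout; raise on failure."""
    result = subprocess.run(argv, capture_output=True, text=True, timeout=timeout)
    if result.returncode != 0:
        raise RuntimeError(
            f"{argv[0]} failed ({result.returncode}): {result.stderr.strip()}"
        )
    return result.stdout

# e.g. report = run_cli(["nova", "research", "AI agents", "--depth", "quick"])
```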

Tools

novada_search

Search the web via Google, Bing, or 3 other engines. Returns structured results with titles, URLs, and snippets.

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| query | string | Yes | | Search query |
| engine | string | No | "google" | `google`, `bing`, `duckduckgo`, `yahoo`, `yandex` |
| num | number | No | 10 | Results count (1-20) |
| country | string | No | | Country code (`us`, `uk`, `de`) |
| language | string | No | | Language code (`en`, `zh`, `de`) |

novada_extract

Extract the main content from any URL. Returns title, description, body text, and links.

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| url | string | Yes | | URL to extract |
| format | string | No | "markdown" | `markdown`, `text`, `html` |

novada_crawl

Crawl a website and extract content from multiple pages concurrently.

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| url | string | Yes | | Seed URL |
| max_pages | number | No | 5 | Max pages (1-20) |
| strategy | string | No | "bfs" | `bfs` (breadth-first) or `dfs` (depth-first) |
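The difference between the two strategies is easiest to see on a toy link graph. The sketch below reproduces generic BFS/DFS visit order and is not the server's actual crawler:

```python
from collections import deque

def crawl_order(links: dict[str, list[str]], seed: str,
                max_pages: int = 5, strategy: str = "bfs") -> list[str]:
    """Return the order in which pages of a toy link graph would be
    visited under a breadth-first ("bfs") or depth-first ("dfs") crawl."""
    frontier = deque([seed])
    visited, order = set(), []
    while frontier and len(order) < max_pages:
        url = frontier.popleft() if strategy == "bfs" else frontier.pop()
        if url in visited:
            continue
        visited.add(url)
        order.append(url)
        for child in links.get(url, []):
            if child not in visited:
                frontier.append(child)
    return order

site = {"/": ["/a", "/b"], "/a": ["/a1"], "/b": ["/b1"]}
# BFS visits siblings first; DFS follows one branch to its end first.
```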

novada_map

Discover all URLs on a website. Fast — collects links without extracting content.

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| url | string | Yes | | Root URL |
| search | string | No | | Filter URLs by search term |
| limit | number | No | 50 | Max URLs (1-100) |
| include_subdomains | boolean | No | false | Include subdomain URLs |
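The same filtering rules can be mimicked client-side over any URL list. An illustrative sketch, not the server's actual logic; `filter_urls` is a hypothetical helper:

```python
from urllib.parse import urlparse

def filter_urls(urls, root, search=None, limit=50, include_subdomains=False):
    """Keep URLs on the root host (optionally its subdomains), match an
    optional search term, and cap the result at `limit` entries."""
    root_host = urlparse(root).hostname
    kept = []
    for url in urls:
        host = urlparse(url).hostname or ""
        same_host = host == root_host
        subdomain = host.endswith("." + root_host)
        if not (same_host or (include_subdomains and subdomain)):
            continue
        if search and search.lower() not in url.lower():
            continue
        kept.append(url)
        if len(kept) >= limit:
            break
    return kept
```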

novada_research

Multi-step web research. Runs 3-6 parallel searches, deduplicates, returns a cited report.

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| question | string | Yes | | Research question (min 5 chars) |
| depth | string | No | "quick" | `quick` (3 searches) or `deep` (5-6) |

Use Cases

| Use Case | Tools | How It Works |
| --- | --- | --- |
| RAG pipeline | search + extract | Search → extract full text → vector DB |
| Agentic research | research | One call → multi-source report with citations |
| Real-time grounding | search | Facts beyond training cutoff |
| Competitive intel | crawl + extract | Crawl competitor sites → extract changes |
| Lead generation | search | Structured company/product lists |
| SEO tracking | search | Keywords across 5 engines, 195 countries |
| Site audit | map | Discover all pages before extracting |
| Fact-checking | search | Claim → evidence search → verdict |
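The RAG pipeline row usually needs a chunking step between extraction and the vector DB. A minimal sketch with a hypothetical `chunk_text` helper (any real pipeline would likely chunk on sentence or heading boundaries instead):

```python
def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split extracted page text into fixed-size overlapping chunks,
    ready for embedding and insertion into a vector DB."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks
```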

Why Novada?

| Feature | Novada | Tavily | Firecrawl | Brave Search |
| --- | --- | --- | --- | --- |
| Web search | 5 engines | 1 engine | 1 engine | 1 engine |
| URL extraction | Yes | Yes | Yes | No |
| Website crawling | BFS/DFS | Yes | Yes (async) | No |
| URL mapping | Yes | Yes | Yes | No |
| Research | Yes | Yes | No | No |
| Geo-targeting | 195 countries | Country param | No | Country param |
| Anti-bot | Proxy (100M+ IPs) | No | Browser (headless Chrome) | No |
| CLI | `nova` command | No | No | No |

Prerequisites

- Node.js with `npm` / `npx`
- A Novada API key (free at novada.com)

中文文档 (Chinese Documentation)

Introduction

Novada MCP Server is a Model Context Protocol (MCP) server that gives AI agents real-time access to the internet: search, extract, crawl, map, and research web content. All requests are routed through Novada's proxy infrastructure (100M+ IPs, 195 countries, anti-bot bypass).

Quick Start

npm install -g novada-mcp
export NOVADA_API_KEY=your-key    # Free at novada.com

nova search "best desserts in Düsseldorf" --country de
nova extract https://example.com
nova map https://docs.example.com --search "api"
nova research "How do AI agents use web scraping?" --depth deep

Connect to Claude Code

claude mcp add novada -e NOVADA_API_KEY=your-key -- npx -y novada-mcp

Tools

| Tool | Function | Parameters |
| --- | --- | --- |
| novada_search | Search the web via 5 engines | query (required), engine, num, country, language |
| novada_extract | Extract the main content from any URL | url (required), format |
| novada_crawl | Crawl a site, extracting multiple pages concurrently | url (required), max_pages, strategy |
| novada_map | Discover all URLs on a site (no content extraction) | url (required), search, limit |
| novada_research | Multi-step research returning a cited report | question (required), depth |

Use Cases

| Use Case | Tools | Notes |
| --- | --- | --- |
| RAG pipeline | search + extract | Search → extract full text → vector DB |
| Agentic research | research | One call → multi-source report |
| Real-time knowledge | search | Facts beyond the training cutoff |
| Competitive analysis | crawl + extract | Crawl competitor sites → extract changes |
| Lead generation | search | Structured company/product lists |
| SEO tracking | search | Keywords across 5 engines, 195 countries |

Why Novada?

| Feature | Novada | Tavily | Firecrawl |
| --- | --- | --- | --- |
| Search engines | 5 | 1 | 1 |
| Geo-targeting | 195 countries | Country param | No |
| Anti-bot | Proxy (100M+ IPs) | No | Browser |
| CLI | `nova` command | No | No |

Prerequisites

- Node.js with `npm` / `npx`
- A Novada API key (free at novada.com)


About

Novada — web data infrastructure for developers and AI agents. 100M+ proxy IPs, 195 countries.

License

MIT
