Novada MCP Server

Search, extract, crawl, map, and research the web — from any AI agent or terminal.

Novada MCP Server gives AI agents real-time access to the live web: search, extract, crawl, map, and research web content. All requests route through Novada's proxy infrastructure, powered by novada.com: 100M+ proxy IPs across 195 countries, with anti-bot bypass.


---


Jump to: Quick Start · Tools · Prompts · Resources · Examples · Use Cases · Comparison


nova — CLI

npm install -g novada-mcp
export NOVADA_API_KEY=your-key    # Free at novada.com
nova search "best desserts in Düsseldorf" --country de
nova search "AI funding news" --time week --include "techcrunch.com,wired.com"
nova extract https://example.com
nova crawl https://docs.example.com --max-pages 10 --select "/api/.*"
nova map https://docs.example.com --search "webhook" --max-depth 3
nova research "How do AI agents use web scraping?" --depth deep --focus "production use cases"

Real Output Examples

nova search "best desserts in Düsseldorf" --country de

## Search Results
results:3 | engine:google | country:de

---

### 1. THE BEST Dessert in Düsseldorf
url: https://www.tripadvisor.com/Restaurants-g187373-zfg9909-Dusseldorf...
snippet: Heinemann Konditorei Confiserie (4.4★), Eis-Café Pia (4.5★), Cafe Huftgold (4.3★)

### 2. Top 10 Best Desserts Near Dusseldorf
url: https://www.yelp.com/search?cflt=desserts&find_loc=Dusseldorf...
snippet: Namu Café, Pure Pastry, Tenten Coffee, Eiscafé Pia...

### 3. Good Dessert Spots : r/duesseldorf
url: https://www.reddit.com/r/duesseldorf/comments/1mxh4bj/...
snippet: "I'm moving to Düsseldorf soon and I love trying out desserts!"

---
## Agent Hints
- To read any result in full: `novada_extract` with its url
- To batch-read multiple results: `novada_extract` with `url=[url1, url2, ...]`
- For deeper multi-source research: `novada_research`

nova research "How do AI agents use web scraping?" --depth deep

## Research Report
question: "How do AI agents use web scraping?"
depth:deep (auto-selected) | searches:6 | results:28 | unique_sources:15

---

## Search Queries Used
1. How do AI agents use web scraping?
2. ai agents web scraping overview explained
3. ai agents web scraping vs alternatives comparison
4. ai agents web scraping best practices real world
5. ai agents web scraping challenges limitations
6. "ai" "agents" site:reddit.com OR site:news.ycombinator.com

## Key Findings
1. **How AI Agents Are Changing the Future of Web Scraping**
   https://medium.com/@davidfagb/...
   These agents can think, understand, and adjust to changes in web structure...

## Sources
1. [How AI Agents Are Changing Web Scraping](https://medium.com/...)

---
## Agent Hints
- 15 sources found. Extract the most relevant with: `novada_extract` with url=[url1, url2]
- For more coverage: use depth='comprehensive' (8-10 searches).

Map → Batch Extract Workflow

# Step 1: Discover pages
nova map https://docs.example.com --search "webhook" --max-depth 3

# Step 2: Batch-extract multiple pages in one call
nova extract https://docs.example.com/webhooks/events https://docs.example.com/webhooks/retry
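The two steps above can be scripted. Below is a minimal Python sketch of step 2; `build_extract_command` is a hypothetical helper (not part of the package) that assembles a batch `nova extract` invocation, respecting the documented limit of 10 URLs per batch:

```python
def build_extract_command(urls, batch_limit=10):
    """Build a batch `nova extract` argv from a list of mapped URLs.

    Batch extraction is capped at 10 URLs, so anything beyond the
    limit is dropped here; a real script would loop over batches.
    """
    if not urls:
        raise ValueError("no URLs to extract")
    return ["nova", "extract", *urls[:batch_limit]]

cmd = build_extract_command([
    "https://docs.example.com/webhooks/events",
    "https://docs.example.com/webhooks/retry",
])
# cmd can be handed to subprocess.run(cmd, capture_output=True, text=True)
```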

Quick Start

Claude Code (1 command)

claude mcp add novada -e NOVADA_API_KEY=your-key -- npx -y novada-mcp

Add --scope user to enable it for all projects:

claude mcp add --scope user novada -e NOVADA_API_KEY=your-key -- npx -y novada-mcp

Smithery (1 click)

Install via Smithery — supports Claude Desktop, Cursor, VS Code, and more.

npx -y @smithery/cli install novada-mcp --client claude
Cursor / VS Code / Windsurf / Claude Desktop

Cursor (.cursor/mcp.json):

{ "mcpServers": { "novada": { "command": "npx", "args": ["-y", "novada-mcp@latest"], "env": { "NOVADA_API_KEY": "your-key" } } } }

VS Code (.vscode/mcp.json):

{ "servers": { "novada": { "command": "npx", "args": ["-y", "novada-mcp@latest"], "env": { "NOVADA_API_KEY": "your-key" } } } }

Windsurf (~/.codeium/windsurf/mcp_config.json):

{ "mcpServers": { "novada": { "command": "npx", "args": ["-y", "novada-mcp@latest"], "env": { "NOVADA_API_KEY": "your-key" } } } }

Claude Desktop (~/Library/Application Support/Claude/claude_desktop_config.json):

{ "mcpServers": { "novada": { "command": "npx", "args": ["-y", "novada-mcp@latest"], "env": { "NOVADA_API_KEY": "your-key" } } } }
Python (via CLI)
import subprocess, os

result = subprocess.run(
    ["nova", "search", "AI agent frameworks"],
    capture_output=True, text=True,
    env={**os.environ, "NOVADA_API_KEY": "your-key"}
)
print(result.stdout)

Tools

novada_search

Search the web via Google, Bing, DuckDuckGo, Yahoo, or Yandex. Returns structured results with titles, URLs, and snippets.

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `query` | string | Yes | | Search query |
| `engine` | string | No | `"google"` | `google`, `bing`, `duckduckgo`, `yahoo`, `yandex` |
| `num` | number | No | `10` | Result count (1-20) |
| `country` | string | No | | Country code (`us`, `uk`, `de`) |
| `language` | string | No | | Language code (`en`, `zh`, `de`) |
| `time_range` | string | No | | `day`, `week`, `month`, `year` |
| `start_date` | string | No | | Start date, `YYYY-MM-DD` |
| `end_date` | string | No | | End date, `YYYY-MM-DD` |
| `include_domains` | string[] | No | | Only return results from these domains |
| `exclude_domains` | string[] | No | | Exclude results from these domains |
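When driving the tool programmatically, it can help to validate arguments client-side before making a call. The sketch below mirrors the documented constraints; `build_search_args` and the constant names are hypothetical helpers, not part of the package:

```python
# Allowed values, copied from the parameter table above.
ENGINES = {"google", "bing", "duckduckgo", "yahoo", "yandex"}
TIME_RANGES = {"day", "week", "month", "year"}

def build_search_args(query, engine="google", num=10, time_range=None):
    """Validate novada_search arguments against the documented schema."""
    if not query:
        raise ValueError("query is required")
    if engine not in ENGINES:
        raise ValueError(f"unknown engine: {engine!r}")
    if not 1 <= num <= 20:
        raise ValueError("num must be between 1 and 20")
    if time_range is not None and time_range not in TIME_RANGES:
        raise ValueError(f"unknown time_range: {time_range!r}")
    args = {"query": query, "engine": engine, "num": num}
    if time_range:
        args["time_range"] = time_range
    return args
```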

novada_extract

Extract the main content from any URL. Supports batch extraction of multiple URLs in parallel.

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `url` | string \| string[] | Yes | | URL or array of URLs (max 10 for batch) |
| `format` | string | No | `"markdown"` | `markdown`, `text`, `html` |
| `query` | string | No | | Query context hint for agent-side filtering |
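If you have more than 10 URLs (say, from a large `novada_map` run), split them into compliant batches first. A minimal sketch, assuming only the documented 10-URL limit; `batch_urls` is a hypothetical helper:

```python
def batch_urls(urls, max_batch=10):
    """Split a URL list into chunks that respect the 10-URL batch limit."""
    return [urls[i:i + max_batch] for i in range(0, len(urls), max_batch)]

# 25 discovered URLs become three extract calls: 10 + 10 + 5.
batches = batch_urls([f"https://example.com/page/{i}" for i in range(25)])
```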

novada_crawl

Crawl a website and extract content from multiple pages concurrently.

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `url` | string | Yes | | Seed URL |
| `max_pages` | number | No | `5` | Max pages (1-20) |
| `strategy` | string | No | `"bfs"` | `bfs` (breadth-first) or `dfs` (depth-first) |
| `select_paths` | string[] | No | | Regex patterns — only crawl matching paths |
| `exclude_paths` | string[] | No | | Regex patterns — skip matching paths |
| `instructions` | string | No | | Natural-language hint for agent-side filtering |
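To illustrate how `select_paths` and `exclude_paths` interact, here is a sketch of the filtering logic as described above (select first, then exclude). This is an assumption about the semantics, not the server's actual code, and `path_allowed` is a hypothetical helper:

```python
import re
from urllib.parse import urlparse

def path_allowed(url, select_paths=None, exclude_paths=None):
    """Decide whether a URL's path passes the regex filters.

    If select_paths is given, the path must match at least one pattern;
    any match in exclude_paths then vetoes the URL.
    """
    path = urlparse(url).path
    if select_paths and not any(re.search(p, path) for p in select_paths):
        return False
    if exclude_paths and any(re.search(p, path) for p in exclude_paths):
        return False
    return True
```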

novada_map

Discover all URLs on a website. Fast — collects links without extracting content.

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `url` | string | Yes | | Root URL |
| `search` | string | No | | Filter URLs by search term |
| `limit` | number | No | `50` | Max URLs (1-100) |
| `max_depth` | number | No | `2` | BFS depth limit (1-5) |
| `include_subdomains` | boolean | No | `false` | Include subdomain URLs |
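The `include_subdomains` flag controls which hosts count as part of the mapped site. A sketch of that scoping rule, under the assumption that a subdomain is any host ending in `.` plus the root host; `in_scope` is a hypothetical helper:

```python
from urllib.parse import urlparse

def in_scope(url, root_host, include_subdomains=False):
    """Check whether a discovered URL belongs to the mapped site."""
    host = urlparse(url).hostname or ""
    if host == root_host:
        return True
    return include_subdomains and host.endswith("." + root_host)
```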

novada_research

Multi-step web research. Runs 3-10 parallel searches, deduplicates, returns a cited report.

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `question` | string | Yes | | Research question (min 5 chars) |
| `depth` | string | No | `"auto"` | `auto`, `quick`, `deep`, `comprehensive` |
| `focus` | string | No | | Narrow sub-query focus (e.g. "production use cases") |
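The dedup step mentioned above (28 results collapsing to 15 unique sources in the example output) amounts to keeping the first occurrence of each URL across the parallel searches. A sketch of that idea; `dedupe_sources` is a hypothetical helper, not the server's implementation:

```python
def dedupe_sources(results):
    """Order-preserving dedup of search results by URL."""
    seen, unique = set(), []
    for r in results:
        if r["url"] not in seen:
            seen.add(r["url"])
            unique.append(r)
    return unique
```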

Prompts

MCP prompts are pre-built workflow templates visible in supported clients (Claude Desktop, LobeChat, etc.).

| Prompt | Description | Arguments |
|---|---|---|
| `research_topic` | Deep multi-source research with optional country and focus | `topic` (required), `country`, `focus` |
| `extract_and_summarize` | Extract one or more URLs and summarize | `urls` (required), `focus` |
| `site_audit` | Map site structure, then extract key sections | `url` (required), `sections` |

Resources

Read-only data agents can access before deciding which tool to call.

| URI | Description |
|---|---|
| `novada://engines` | All 5 engines with characteristics and use cases |
| `novada://countries` | 195 country codes for geo-targeted search |
| `novada://guide` | Decision tree for choosing between tools |

Use Cases

| Use Case | Tools | How It Works |
|---|---|---|
| RAG pipeline | search + extract | Search → batch-extract full text → vector DB |
| Agentic research | research | One call → multi-source report with citations |
| Real-time grounding | search | Facts beyond the training cutoff |
| Competitive intel | crawl | Crawl competitor sites → extract changes |
| Lead generation | search | Structured company/product lists |
| SEO tracking | search | Keywords across 5 engines, 195 countries |
| Site audit | map + extract | Discover pages, then batch-extract targets |
| Domain filtering | search | `include_domains` to restrict to trusted sources |
| Trend monitoring | search | `time_range=week` for recent-only results |
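For the RAG-pipeline row, the last step before indexing is usually chunking the extracted markdown. A minimal sketch of a fixed-size character chunker with overlap; `chunk_text` and its defaults are illustrative choices, not part of the package:

```python
def chunk_text(text, size=500, overlap=50):
    """Naive fixed-size character chunker for vector-DB ingestion."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks
```

In practice you would feed each chunk, together with its source URL from the search results, into your embedding model.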

Why Novada?

| Feature | Novada | Tavily | Firecrawl | Brave Search |
|---|---|---|---|---|
| Web search | 5 engines | 1 engine | 1 engine | 1 engine |
| URL extraction | Yes | Yes | Yes | No |
| Batch extraction | Yes (10 URLs) | No | Yes | No |
| Website crawling | BFS/DFS | Yes | Yes (async) | No |
| URL mapping | Yes | Yes | Yes | No |
| Research | Yes | Yes | No | No |
| MCP Prompts | 3 | No | No | No |
| MCP Resources | 3 | No | No | No |
| Geo-targeting | 195 countries | Country param | No | Country param |
| Domain filtering | include/exclude | No | No | No |
| Anti-bot | Proxy (100M+ IPs) | No | Headless Chrome | No |
| CLI | `nova` command | No | No | No |

Prerequisites

- Node.js (for npx or npm install -g)
- A Novada API key, exported as NOVADA_API_KEY (free at novada.com)


About

Novada — web data infrastructure for developers and AI agents. 100M+ proxy IPs, 195 countries.

License

MIT
