Parallel Task MCP
Perform Deep Research and Batch Tasks
The Parallel Task MCP lets you initiate deep research or task groups directly from your favorite LLM client. It is a great way to get to know Parallel's different APIs by exploring their capabilities, and it is also handy for running small experiments while developing production systems against Parallel APIs. Please read our MCP docs for more details.
Installation
The official installation instructions can be found here.
{
  "mcpServers": {
    "Parallel Task MCP": {
      "url": "https://task-mcp.parallel.ai/mcp"
    }
  }
}
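Beyond configuring an LLM client, you can also connect to the hosted server programmatically as a quick smoke test. The sketch below uses the MCP TypeScript SDK (@modelcontextprotocol/sdk) to open a streamable HTTP connection and list the tools the server exposes; the client name is arbitrary, and any API key or auth headers the Parallel server may require are omitted as an assumption left to the reader.

// Sketch: connect to the hosted Parallel Task MCP and list its tools.
// Auth (if the server requires it) is intentionally not shown here.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const transport = new StreamableHTTPClientTransport(
  new URL("https://task-mcp.parallel.ai/mcp"),
);
const client = new Client({ name: "parallel-task-mcp-demo", version: "0.1.0" });

await client.connect(transport);
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
await client.close();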
Running locally
This repo contains a proxy to the MCP, which is hosted at https://task-mcp.parallel.ai/mcp.
How to run and test locally:
- Start the dev server: wrangler dev
- Launch the inspector: npx @modelcontextprotocol/inspector
- Connect to server: http://localhost:8787/mcp
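The proxy's source is not shown in these docs. As a rough sketch only, and not the repo's actual code, a pass-through Cloudflare Worker along the lines below would forward local /mcp traffic to the hosted server; the route check and 404 fallback are assumptions.

// Rough sketch of a pass-through proxy worker (assumed structure; the
// actual repo's implementation may differ).
const UPSTREAM = "https://task-mcp.parallel.ai/mcp";

export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname !== "/mcp") {
      return new Response("Not found", { status: 404 });
    }
    // Pass the method, headers, and body through unchanged so MCP's
    // streamable HTTP transport works end to end.
    return fetch(UPSTREAM, {
      method: request.method,
      headers: request.headers,
      body: request.body,
    });
  },
};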
Related servers
Bright Data
Sponsored: Discover, extract, and interact with the web - one interface powering automated access across the public internet.
ScrapeGraph AI
AI-powered web scraping using the ScrapeGraph AI API. Requires an API key.
Puppeteer Vision
Scrape webpages and convert them to markdown using Puppeteer. Features AI-driven interaction capabilities.
Automatic MCP Discovery
AI powered automation toolkit which acts as an agent that discovers MCP servers for you. Point it at GitHub/npm/configure your own discovery, let GPT or Claude analyze the API or MCP or any tool, get ready-to-ship plugin configs. Zero manual work.
URnetwork
High quality VPN and Proxy connections
Deepwiki
Fetches content from deepwiki.com and converts it into LLM-readable markdown.
Configurable Puppeteer MCP Server
A configurable MCP server for browser automation using Puppeteer.
Social & Content MCP Server
Trending content from Hacker News, Dev.to, IMDb, podcasts, and Eventbrite
DeepResearch MCP
A powerful research assistant for conducting iterative web searches, analysis, and report generation.
Google News Trends MCP
Access Google News and Google Trends data without paid APIs.
Fetch
Web content fetching and conversion for efficient LLM usage