# LinkRescue MCP Server
Find broken links fast, prioritize by impact, and generate fix suggestions your AI agent can act on.
LinkRescue MCP exposes broken-link scanning, monitoring, and remediation workflows through the Model Context Protocol (MCP), so tools like Claude and Cursor can run link-health operations directly.
## What You Get

- `check_broken_links`: scan a URL (or sitemap) and return a structured broken-link report
- `monitor_links`: set up recurring monitoring for a website
- `get_fix_suggestions`: generate prioritized remediation recommendations
- `health_check`: verify MCP server and backend API connectivity
If the LinkRescue backend API is unreachable, the server falls back to realistic simulated data so local testing and demos keep working.
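That fallback pattern can be sketched roughly like this (the `/scan` endpoint path, the response fields, and the `check_links` helper are illustrative, not the actual implementation):

```python
import json
import urllib.error
import urllib.request

API_BASE_URL = "http://localhost:3000/api/v1"  # default from the config table below

def check_links(url: str) -> dict:
    """Query the backend scan endpoint; fall back to simulated data on failure."""
    req = urllib.request.Request(
        f"{API_BASE_URL}/scan",
        data=json.dumps({"url": url}).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return json.load(resp)
    except (urllib.error.URLError, TimeoutError):
        # Backend unreachable: return realistic-looking simulated results
        # so local testing and demos keep working
        return {
            "url": url,
            "simulated": True,
            "broken_links": [{"link": f"{url}/old-page", "status": 404}],
        }
```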
## Requirements

- Python 3.11+
- pip
## Quick Start

```bash
git clone https://github.com/carsonroell-debug/linkrescue-mcp.git
cd linkrescue-mcp
pip install -r requirements.txt
python main.py
```

The MCP endpoint is then available at `http://localhost:8000/mcp`.
## Configuration

| Variable | Description | Default |
|---|---|---|
| `LINKRESCUE_API_BASE_URL` | Base URL for the LinkRescue API | `http://localhost:3000/api/v1` |
| `LINKRESCUE_API_KEY` | API key for authenticated requests | empty |

Example:

```bash
export LINKRESCUE_API_BASE_URL="https://your-api.example.com/api/v1"
export LINKRESCUE_API_KEY="your-api-key"
python main.py
```
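In `main.py`-style code, these variables can be read with stdlib defaults, for example (sending the key as a Bearer token is an assumption here, not the documented auth scheme):

```python
import os

# Defaults mirror the configuration table above
api_base_url = os.environ.get("LINKRESCUE_API_BASE_URL", "http://localhost:3000/api/v1")
api_key = os.environ.get("LINKRESCUE_API_KEY", "")

# Hypothetical header construction; only sent when a key is configured
headers = {"Authorization": f"Bearer {api_key}"} if api_key else {}
```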
## Running Options

Run directly:

```bash
python main.py
```

Run via the FastMCP CLI:

```bash
fastmcp run main.py --transport streamable-http --port 8000
```
## Connect an MCP Client

### Claude Desktop

Add this to `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "linkrescue": {
      "url": "http://localhost:8000/mcp"
    }
  }
}
```

### Claude Code

```bash
claude mcp add linkrescue --transport http http://localhost:8000/mcp
```
## Try It

```bash
fastmcp list-tools main.py
fastmcp call-tool main.py health_check '{}'
fastmcp call-tool main.py check_broken_links '{"url":"https://example.com"}'
```
## Tool Inputs and Outputs

### check_broken_links

Inputs:

- `url` (required): site URL to scan
- `sitemap_url` (optional): crawl from a sitemap
- `max_depth` (optional, default `3`): crawl depth

Returns scan metadata, broken-link details, and summary statistics.
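A report might look roughly like this (the field names are illustrative; inspect real tool output before relying on a particular shape):

```json
{
  "url": "https://example.com",
  "pages_scanned": 12,
  "broken_links": [
    {
      "source_page": "https://example.com/blog",
      "link": "https://example.com/old-post",
      "status": 404
    }
  ],
  "summary": { "total_links": 150, "broken": 1 }
}
```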
### monitor_links

Inputs:

- `url` (required)
- `frequency_hours` (optional, default `24`)

Returns the monitoring ID, schedule details, and status.
### get_fix_suggestions

Input (any of):

- the full report from `check_broken_links`, or
- a raw `broken_links` array, or
- a JSON string of either format

Returns prioritized actions and suggested remediation steps.
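Accepting all three input shapes amounts to a small normalization step, sketched here (the `normalize_broken_links` helper is hypothetical, not part of the server's API):

```python
import json

def normalize_broken_links(payload) -> list:
    """Accept a full report dict, a raw broken_links list, or a JSON string of either."""
    if isinstance(payload, str):
        payload = json.loads(payload)           # JSON string of either format
    if isinstance(payload, dict):
        return payload.get("broken_links", [])  # full check_broken_links report
    return list(payload)                        # raw broken_links array
```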
### health_check

No input. Returns server status and backend API reachability.
## Deployment

### Smithery

This repo includes `smithery.yaml` and `smithery.json`.

1. Push the repository to GitHub
2. Create/add a server in Smithery
3. Point Smithery to this repository
### Docker / Hosting Platforms

A Dockerfile is included for Railway, Fly.io, and other container hosts.

```bash
# Railway
railway up

# Fly.io
fly launch
fly deploy
```

Set `LINKRESCUE_API_BASE_URL` and `LINKRESCUE_API_KEY` in your host environment.
## Architecture

```text
Agent (Claude, Cursor, etc.)
        |  MCP
        v
LinkRescue MCP Server (this repo)
        |  HTTP API
        v
LinkRescue Backend API
```
This server is a translation layer between MCP tool calls and LinkRescue API operations.
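That translation can be pictured as a thin routing table from tool names to backend endpoints (the endpoint paths here are made up for illustration and do not reflect the real API):

```python
import json
import urllib.request

API_BASE_URL = "http://localhost:3000/api/v1"

# Hypothetical mapping from MCP tool names to backend endpoints
TOOL_ROUTES = {
    "check_broken_links": "/scan",
    "monitor_links": "/monitors",
    "get_fix_suggestions": "/suggestions",
}

def dispatch(tool: str, arguments: dict) -> urllib.request.Request:
    """Translate an MCP tool call into an HTTP request against the backend."""
    if tool not in TOOL_ROUTES:
        raise ValueError(f"unknown tool: {tool}")
    return urllib.request.Request(
        API_BASE_URL + TOOL_ROUTES[tool],
        data=json.dumps(arguments).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```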
## Additional README Variants

- Developer-focused version: `README.dev.md`
- Marketplace-focused version: `README.marketplace.md`