LinkRescue MCP Server
Find broken links fast, prioritize by impact, and generate fix suggestions your AI agent can act on.
LinkRescue MCP exposes broken-link scanning, monitoring, and remediation workflows through the Model Context Protocol (MCP), so tools like Claude and Cursor can run link-health operations directly.
What You Get
- check_broken_links: scan a URL (or sitemap) and return a structured broken-link report
- monitor_links: set up recurring monitoring for a website
- get_fix_suggestions: generate prioritized remediation recommendations
- health_check: verify MCP server and backend API connectivity
If the LinkRescue backend API is unreachable, the server falls back to realistic simulated data so local testing and demos keep working.
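The fallback could be sketched as follows (a sketch only — helper names and the report fields are hypothetical, not the server's actual code or schema):

```python
import json
import urllib.error
import urllib.request


def simulated_report() -> dict:
    # Realistic-looking placeholder so local testing and demos keep working.
    # Field names here are illustrative, not the real API schema.
    return {"status": "simulated", "broken_links": [], "total_checked": 0}


def fetch_report(api_url: str, timeout: float = 3.0) -> dict:
    """Try the backend API; fall back to simulated data if unreachable."""
    try:
        with urllib.request.urlopen(api_url, timeout=timeout) as resp:
            return json.load(resp)
    except (urllib.error.URLError, TimeoutError, OSError):
        return simulated_report()
```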
Requirements
- Python 3.11+
- pip
Quick Start
git clone https://github.com/carsonroell-debug/linkrescue-mcp.git
cd linkrescue-mcp
pip install -r requirements.txt
python main.py
MCP endpoint:
http://localhost:8000/mcp
Configuration
| Variable | Description | Default |
|---|---|---|
| LINKRESCUE_API_BASE_URL | Base URL for LinkRescue API | http://localhost:3000/api/v1 |
| LINKRESCUE_API_KEY | API key for authenticated requests | (empty) |
Example:
export LINKRESCUE_API_BASE_URL="https://your-api.example.com/api/v1"
export LINKRESCUE_API_KEY="your-api-key"
python main.py
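Internally the server presumably resolves these variables with the defaults from the table above — roughly like this (a sketch; the Bearer auth scheme is an assumption, not confirmed by this README):

```python
import os

# Defaults mirror the configuration table above.
API_BASE_URL = os.environ.get("LINKRESCUE_API_BASE_URL", "http://localhost:3000/api/v1")
API_KEY = os.environ.get("LINKRESCUE_API_KEY", "")


def auth_headers(api_key: str = API_KEY) -> dict:
    # Only attach credentials when a key is configured;
    # "Bearer" is an assumed scheme for illustration.
    return {"Authorization": f"Bearer {api_key}"} if api_key else {}
```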
Running Options
Run directly:
python main.py
Run via FastMCP CLI:
fastmcp run main.py --transport streamable-http --port 8000
Connect an MCP Client
Claude Desktop
Add this to claude_desktop_config.json:
{
"mcpServers": {
"linkrescue": {
"url": "http://localhost:8000/mcp"
}
}
}
Claude Code
claude mcp add linkrescue --transport http http://localhost:8000/mcp
Try It
fastmcp list-tools main.py
fastmcp call-tool main.py health_check '{}'
fastmcp call-tool main.py check_broken_links '{"url":"https://example.com"}'
Tool Inputs and Outputs
check_broken_links
Inputs:
- url (required): site URL to scan
- sitemap_url (optional): crawl from sitemap
- max_depth (optional, default 3): crawl depth
Returns scan metadata, broken-link details, and summary statistics.
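The exact schema isn't documented here, but a report of roughly this shape covers the three parts listed above (all field names are illustrative assumptions, not the real API contract):

```python
# Illustrative only: real field names may differ.
example_report = {
    "scan": {"url": "https://example.com", "max_depth": 3, "pages_crawled": 42},
    "broken_links": [
        {
            "url": "https://example.com/old-page",
            "status_code": 404,
            "found_on": "https://example.com/blog",
            "anchor_text": "old page",
        }
    ],
    "summary": {"total_links": 120, "broken": 1, "ok": 119},
}
```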
monitor_links
Inputs:
- url (required)
- frequency_hours (optional, default 24)
Returns monitoring ID, schedule details, and status.
get_fix_suggestions
Input:
- full report from check_broken_links, or
- raw broken_links array, or
- JSON string of either format
Returns prioritized actions and suggested remediation steps.
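Accepting all three input forms implies a normalization step along these lines (`normalize_input` is a hypothetical helper for illustration, not part of the documented API):

```python
import json
from typing import Union


def normalize_input(payload: Union[str, dict, list]) -> list:
    """Reduce any accepted input form to a plain list of broken-link dicts."""
    if isinstance(payload, str):      # JSON string of either format
        payload = json.loads(payload)
    if isinstance(payload, dict):     # full check_broken_links report
        return payload.get("broken_links", [])
    return payload                    # already a raw broken_links array
```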
health_check
No input. Returns server status and backend API reachability.
Deployment
Smithery
This repo includes smithery.yaml and smithery.json.
- Push repository to GitHub
- Create/add server in Smithery
- Point Smithery to this repository
Docker / Hosting Platforms
A Dockerfile is included for Railway, Fly.io, and other container hosts.
# Railway
railway up
# Fly.io
fly launch
fly deploy
Set LINKRESCUE_API_BASE_URL and LINKRESCUE_API_KEY in your host environment.
Architecture
Agent (Claude, Cursor, etc.)
-> MCP
LinkRescue MCP Server (this repo)
-> HTTP API
LinkRescue Backend API
This server is a translation layer between MCP tool calls and LinkRescue API operations.
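That translation amounts to mapping MCP tool arguments onto backend HTTP requests; a minimal sketch for check_broken_links (endpoint path and parameter names are assumptions):

```python
from typing import Optional


def tool_args_to_api_request(url: str, sitemap_url: Optional[str] = None,
                             max_depth: int = 3) -> dict:
    """Map check_broken_links tool arguments onto a backend request.

    The "/scans" path and the parameter names are illustrative assumptions.
    """
    params = {"url": url, "max_depth": max_depth}
    if sitemap_url:
        params["sitemap_url"] = sitemap_url
    return {"method": "POST", "path": "/scans", "json": params}
```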
Additional README Variants
- Developer-focused version: README.dev.md
- Marketplace-focused version: README.marketplace.md