LinkRescue MCP Server
Find broken links fast, prioritize by impact, and generate fix suggestions your AI agent can act on.
LinkRescue MCP exposes broken-link scanning, monitoring, and remediation workflows through the Model Context Protocol (MCP), so tools like Claude and Cursor can run link-health operations directly.
What You Get
- check_broken_links: scan a URL (or sitemap) and return a structured broken-link report
- monitor_links: set up recurring monitoring for a website
- get_fix_suggestions: generate prioritized remediation recommendations
- health_check: verify MCP server and backend API connectivity
If the LinkRescue backend API is unreachable, the server falls back to realistic simulated data so local testing and demos keep working.
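A minimal sketch of that fallback behavior, with hypothetical function names (the real server's internals may differ): if the backend call raises, a fabricated but realistic report is returned instead.

```python
# Hypothetical sketch of the backend-unreachable fallback. Function and
# field names are illustrative, not the server's actual implementation.
def simulated_report(url: str) -> dict:
    # Fabricated sample data so demos and local testing keep working.
    return {
        "url": url,
        "simulated": True,
        "broken_links": [{"url": f"{url}/old-page", "status": 404}],
    }

def check_broken_links(url: str, fetch_report) -> dict:
    """fetch_report stands in for the real LinkRescue backend API call."""
    try:
        return fetch_report(url)
    except Exception:
        # Backend unreachable: fall back to simulated data.
        return simulated_report(url)

def unreachable(_url: str) -> dict:
    raise ConnectionError("backend API unreachable")

report = check_broken_links("https://example.com", unreachable)
```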
Requirements
- Python 3.11+
- pip
Quick Start
git clone https://github.com/carsonroell-debug/linkrescue-mcp.git
cd linkrescue-mcp
pip install -r requirements.txt
python main.py
MCP endpoint:
http://localhost:8000/mcp
Configuration
| Variable | Description | Default |
|---|---|---|
| LINKRESCUE_API_BASE_URL | Base URL for the LinkRescue API | http://localhost:3000/api/v1 |
| LINKRESCUE_API_KEY | API key for authenticated requests | empty |
Example:
export LINKRESCUE_API_BASE_URL="https://your-api.example.com/api/v1"
export LINKRESCUE_API_KEY="your-api-key"
python main.py
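Inside the server, reading these variables can be as simple as the sketch below. The `Bearer` authorization scheme here is an assumption for illustration, not a documented part of the LinkRescue API.

```python
import os

# Fall back to the documented defaults when the variables are unset.
API_BASE_URL = os.environ.get("LINKRESCUE_API_BASE_URL", "http://localhost:3000/api/v1")
API_KEY = os.environ.get("LINKRESCUE_API_KEY", "")

def auth_headers(api_key: str = API_KEY) -> dict:
    # Only attach credentials when a key is configured; the Bearer
    # scheme is an assumed convention, not the confirmed API contract.
    return {"Authorization": f"Bearer {api_key}"} if api_key else {}
```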
Running Options
Run directly:
python main.py
Run via FastMCP CLI:
fastmcp run main.py --transport streamable-http --port 8000
Connect an MCP Client
Claude Desktop
Add this to claude_desktop_config.json:
{
"mcpServers": {
"linkrescue": {
"url": "http://localhost:8000/mcp"
}
}
}
Claude Code
claude mcp add linkrescue --transport http http://localhost:8000/mcp
Try It
fastmcp list-tools main.py
fastmcp call-tool main.py health_check '{}'
fastmcp call-tool main.py check_broken_links '{"url":"https://example.com"}'
Tool Inputs and Outputs
check_broken_links
Inputs:
- url (required): site URL to scan
- sitemap_url (optional): crawl from sitemap
- max_depth (optional, default 3): crawl depth
Returns scan metadata, broken-link details, and summary statistics.
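As a rough illustration of those three parts, a result might look like the dictionary below. Every field name and value here is hypothetical, not the exact response contract.

```python
# Hypothetical check_broken_links result shape; field names are
# illustrative only, not the real API schema.
example_report = {
    "scan": {"url": "https://example.com", "max_depth": 3, "pages_scanned": 42},
    "broken_links": [
        {
            "url": "https://example.com/old-page",
            "status": 404,
            "found_on": "https://example.com/blog",
            "anchor_text": "Old page",
        }
    ],
    "summary": {"total_links": 180, "broken": 1},
}
```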
monitor_links
Inputs:
- url (required)
- frequency_hours (optional, default 24)
Returns monitoring ID, schedule details, and status.
get_fix_suggestions
Input (any of the following):
- the full report from check_broken_links
- a raw broken_links array
- a JSON string of either format
Returns prioritized actions and suggested remediation steps.
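Accepting all three input forms typically means normalizing them to one list before generating suggestions. A minimal sketch of that normalization, assuming the report dict keeps its links under a `broken_links` key:

```python
import json

def normalize_broken_links(payload) -> list:
    """Accept a full report dict, a raw broken_links list, or a JSON
    string of either, and return a plain list of broken links."""
    if isinstance(payload, str):
        # JSON string of either format: parse, then fall through.
        payload = json.loads(payload)
    if isinstance(payload, dict):
        # Full report: pull out the broken_links array (name assumed).
        return payload.get("broken_links", [])
    # Already a raw array.
    return list(payload)
```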
health_check
No input. Returns server status and backend API reachability.
Deployment
Smithery
This repo includes smithery.yaml and smithery.json.
- Push repository to GitHub
- Create/add server in Smithery
- Point Smithery to this repository
Docker / Hosting Platforms
A Dockerfile is included for Railway, Fly.io, and other container hosts.
# Railway
railway up
# Fly.io
fly launch
fly deploy
Set LINKRESCUE_API_BASE_URL and LINKRESCUE_API_KEY in your host environment.
Architecture
Agent (Claude, Cursor, etc.)
-> MCP
LinkRescue MCP Server (this repo)
-> HTTP API
LinkRescue Backend API
This server is a translation layer between MCP tool calls and LinkRescue API operations.
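The translation-layer idea can be sketched as a function that turns an MCP tool call's arguments into a backend HTTP request. The `/scans` path and parameter names below are assumptions for illustration, not the real LinkRescue API.

```python
from urllib.parse import urlencode

API_BASE_URL = "http://localhost:3000/api/v1"

def build_scan_request(url: str, max_depth: int = 3) -> str:
    # The MCP tool's inputs become query parameters on a backend
    # endpoint; "/scans" is a hypothetical path, not the documented API.
    return f"{API_BASE_URL}/scans?" + urlencode({"url": url, "max_depth": max_depth})
```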
Additional README Variants
- Developer-focused version: README.dev.md
- Marketplace-focused version: README.marketplace.md