Sitemap MCP Server
A server for fetching, parsing, analyzing, and visualizing website sitemaps.
Discover website architecture and analyze site structure by fetching, parsing, and visualizing sitemaps from any URL. Uncover hidden pages and extract organized hierarchies without manual exploration.
Includes ready-to-use prompt templates for Claude Desktop that let you analyze websites, check sitemap health, extract URLs, find missing content, and create visualizations with just a URL input.
Demo
Get answers to questions about any website by leveraging the power of sitemaps.
Cursor: how many pages does modelcontextprotocol.io have?
Claude + prompt: visualize the sitemap of windsurf.com in a diagram
Click on the "attach" button next to the tools button:
Then select visualize_sitemap:
Now we enter windsurf.com:
And we get a visualization of the sitemap:
Installation
Make sure uv is installed.
Installing in Claude Desktop, Cursor or Windsurf
Add this entry to your claude_desktop_config.json, Cursor settings, etc.:
{
  "mcpServers": {
    "sitemap": {
      "command": "uvx",
      "args": ["sitemap-mcp-server"],
      "env": { "TRANSPORT": "stdio" }
    }
  }
}
Restart Claude Desktop if it is running. For Cursor, simply press refresh and/or enable the MCP server in the settings.
Installing via Smithery
To install sitemap for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @mugoosse/sitemap --client claude
MCP Inspector
uv + stdio transport
npx @modelcontextprotocol/inspector env TRANSPORT=stdio uvx sitemap-mcp-server
Open the MCP Inspector at http://127.0.0.1:6274, select stdio transport, and connect to the MCP server.
uv + sse transport
# Start the server
uvx sitemap-mcp-server
# Start the MCP Inspector in a separate terminal
npx @modelcontextprotocol/inspector connect http://127.0.0.1:8050
Open the MCP Inspector at http://127.0.0.1:6274, select sse transport, and connect to the MCP server.
SSE Transport
If you want to use the SSE transport, follow these steps:
- Start the server:
uvx sitemap-mcp-server
- Configure your MCP Client, e.g. Cursor:
{
  "mcpServers": {
    "sitemap": {
      "transport": "sse",
      "url": "http://localhost:8050/sse"
    }
  }
}
Local Development
For instructions on building and running the project from source, please refer to the DEVELOPERS.md guide.
Usage
Tools
The following tools are available via the MCP server:

- get_sitemap_tree - Fetch and parse the sitemap tree from a website URL
  - Arguments: url (website URL), include_pages (optional, boolean)
  - Returns: JSON representation of the sitemap tree structure
- get_sitemap_pages - Get all pages from a website's sitemap with filtering options
  - Arguments: url (website URL), limit (optional), include_metadata (optional), route (optional), sitemap_url (optional), cursor (optional)
  - Returns: JSON list of pages with pagination metadata
- get_sitemap_stats - Get statistics about a website's sitemap
  - Arguments: url (website URL)
  - Returns: JSON object with sitemap statistics including page counts, modification dates, and subsitemap details
- parse_sitemap_content - Parse a sitemap directly from its XML or text content
  - Arguments: content (sitemap XML content), include_pages (optional, boolean)
  - Returns: JSON representation of the parsed sitemap
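Under the hood, MCP clients wrap these tool invocations in JSON-RPC 2.0 `tools/call` requests, with the tool name and arguments in `params`. A minimal sketch of building such a payload (the request shape follows the MCP specification; the helper function name is our own):

```python
import json

def build_tool_call(name: str, arguments: dict, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 `tools/call` request as defined by the MCP spec."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

request = build_tool_call(
    "get_sitemap_tree", {"url": "https://example.com", "include_pages": True}
)
print(json.dumps(request, indent=2))
```

Every example in the Usage section below corresponds to the `params` portion of such a request.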
Prompts
The server includes ready-to-use prompts that appear as templates in Claude Desktop. After installing the server, you'll see these templates in the "Templates" menu (click the + icon next to the message input):
- Analyze Sitemap: Provides comprehensive structure analysis of a website's sitemap
- Check Sitemap Health: Evaluates SEO and health metrics of a sitemap
- Extract URLs from Sitemap: Extracts and filters specific URLs from a sitemap
- Find Missing Content in Sitemap: Identifies content gaps in a website's sitemap
- Visualize Sitemap Structure: Creates a Mermaid.js diagram visualization of sitemap structure
To use these prompts:
1. Click the + icon next to the message input in Claude Desktop
2. Select the desired template from the list
3. Fill in the website URL when prompted
4. Claude will execute the appropriate sitemap analysis
Examples
Fetch a Complete Sitemap
{
  "name": "get_sitemap_tree",
  "arguments": {
    "url": "https://example.com",
    "include_pages": true
  }
}
Get Pages with Filtering and Pagination
Filter by Route
{
  "name": "get_sitemap_pages",
  "arguments": {
    "url": "https://example.com",
    "limit": 100,
    "include_metadata": true,
    "route": "/blog/"
  }
}
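The effect of the `route` argument is to narrow results to one section of the site. As an illustration of that behavior (this sketch assumes `route` acts as a URL-path prefix filter, which is our reading of the parameter, applied client-side here):

```python
from urllib.parse import urlparse

def filter_by_route(pages: list[str], route: str) -> list[str]:
    """Keep only URLs whose path starts with the given route prefix.

    Assumption: the server's `route` argument behaves like a path-prefix
    filter; this mimics that behavior on a plain list of URLs.
    """
    return [url for url in pages if urlparse(url).path.startswith(route)]

pages = [
    "https://example.com/blog/post-1",
    "https://example.com/about",
    "https://example.com/blog/post-2",
]
print(filter_by_route(pages, "/blog/"))
# → ['https://example.com/blog/post-1', 'https://example.com/blog/post-2']
```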
Filter by Specific Subsitemap
{
  "name": "get_sitemap_pages",
  "arguments": {
    "url": "https://example.com",
    "limit": 100,
    "include_metadata": true,
    "sitemap_url": "https://example.com/blog-sitemap.xml"
  }
}
Cursor-Based Pagination
The server implements MCP cursor-based pagination to handle large sitemaps efficiently:
Initial Request:
{
  "name": "get_sitemap_pages",
  "arguments": {
    "url": "https://example.com",
    "limit": 50
  }
}
Response with Pagination:
{
  "base_url": "https://example.com",
  "pages": [...], // First batch of pages
  "limit": 50,
  "nextCursor": "eyJwYWdlIjoxfQ=="
}
Subsequent Request with Cursor:
{
  "name": "get_sitemap_pages",
  "arguments": {
    "url": "https://example.com",
    "limit": 50,
    "cursor": "eyJwYWdlIjoxfQ=="
  }
}
When there are no more results, the nextCursor field will be absent from the response.
Get Sitemap Statistics
{
  "name": "get_sitemap_stats",
  "arguments": {
    "url": "https://example.com"
  }
}
The response includes both total statistics and detailed stats for each subsitemap:
{
  "total": {
    "url": "https://example.com",
    "page_count": 150,
    "sitemap_count": 3,
    "sitemap_types": ["WebsiteSitemap", "NewsSitemap"],
    "priority_stats": {
      "min": 0.1,
      "max": 1.0,
      "avg": 0.65
    },
    "last_modified_count": 120
  },
  "subsitemaps": [
    {
      "url": "https://example.com/sitemap.xml",
      "type": "WebsiteSitemap",
      "page_count": 100,
      "priority_stats": {
        "min": 0.3,
        "max": 1.0,
        "avg": 0.7
      },
      "last_modified_count": 80
    },
    {
      "url": "https://example.com/blog/sitemap.xml",
      "type": "WebsiteSitemap",
      "page_count": 50,
      "priority_stats": {
        "min": 0.1,
        "max": 0.9,
        "avg": 0.5
      },
      "last_modified_count": 40
    }
  ]
}
This allows MCP clients to understand which subsitemaps might be of interest for further investigation. You can then use the sitemap_url parameter in get_sitemap_pages to filter pages from a specific subsitemap.
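The `priority_stats` fields in the response are plain aggregates over the `<priority>` values of a sitemap's pages. A sketch of how such a summary can be derived (the field names match the response above; the computation itself is our illustration, not the server's code):

```python
def priority_stats(priorities: list[float]) -> dict:
    """Aggregate sitemap <priority> values into min/max/avg,
    mirroring the priority_stats object in the stats response."""
    return {
        "min": min(priorities),
        "max": max(priorities),
        "avg": round(sum(priorities) / len(priorities), 2),
    }

print(priority_stats([0.3, 1.0, 0.8]))  # → {'min': 0.3, 'max': 1.0, 'avg': 0.7}
```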
Parse Sitemap Content Directly
{
  "name": "parse_sitemap_content",
  "arguments": {
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?><urlset xmlns=\"http://www.sitemaps.org/schemas/sitemap/0.9\"><url><loc>https://example.com/</loc></url></urlset>",
    "include_pages": true
  }
}
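Hand-escaping sitemap XML into the JSON `content` argument is error-prone; it is safer to build the payload programmatically and let a JSON serializer handle the quoting. A small stdlib-only sketch producing the same request shape:

```python
import json

# The raw sitemap XML, kept as a plain Python string — no manual \" escaping.
sitemap_xml = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">'
    "<url><loc>https://example.com/</loc></url>"
    "</urlset>"
)

request = {
    "name": "parse_sitemap_content",
    "arguments": {"content": sitemap_xml, "include_pages": True},
}

# json.dumps escapes the embedded quotes in the XML for us.
print(json.dumps(request))
```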
Acknowledgements
- This MCP Server leverages the ultimate-sitemap-parser library
- Built using the Model Context Protocol Python SDK
License
This project is licensed under the MIT License. See the LICENSE file for details.