# mcp-xpath

MCP Server for executing XPath queries on XML content.
## Tools

### xpath

Execute XPath queries on XML content.

- `xml` (string): The XML content to query
- `query` (string): The XPath query to execute
- `mimeType` (optional, string): The MIME type (e.g. text/xml, application/xml, text/html, application/xhtml+xml)

### xpathwithurl

Execute XPath queries on content fetched from a URL.

- `url` (string): The URL to fetch XML/HTML content from
- `query` (string): The XPath query to execute
- `mimeType` (optional, string): The MIME type (e.g. text/xml, application/xml, text/html, application/xhtml+xml)
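The `mimeType` parameter controls how the input is parsed. A minimal sketch of why that matters, assuming the server switches to an HTML-aware parser for text/html (HTML tolerates markup, such as unclosed tags, that a strict XML parser rejects); `callTool` here is a placeholder for however your MCP client invokes a tool:

```javascript
// Sketch: unclosed <li> tags are valid HTML but malformed XML,
// so text/html is the appropriate mimeType for this input.
// (callTool is a placeholder for your MCP client's tool-call API.)
const items = await callTool("xpath", {
  xml: "<ul><li>one<li>two</ul>",
  query: "//li",
  mimeType: "text/html"
});
```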
## Installation

### Installing via Smithery

To install mcp-xpath for Claude Desktop automatically via Smithery:

```bash
npx -y @smithery/cli install @thirdstrandstudio/mcp-xpath --client claude
```
### Manual Installation

```bash
# Install dependencies
npm install

# Build the package
npm run build
```
### Usage with Claude Desktop

Add the following to your `claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "xpath": {
      "command": "npx",
      "args": ["@thirdstrandstudio/mcp-xpath"]
    }
  }
}
```
Or, if you are running the server from a local build:

```json
{
  "mcpServers": {
    "xpath": {
      "command": "node",
      "args": ["/path/to/mcp-xpath/dist/index.js"]
    }
  }
}
```
Replace `/path/to/mcp-xpath` with the actual path to your repository.
## Examples

```javascript
// Select all <item> elements from XML
const result = await callTool("xpath", {
  xml: "<root><item>value1</item><item>value2</item></root>",
  query: "//item/text()",
  mimeType: "text/xml"
});
```
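MCP tool results arrive as a list of content items. A sketch of unwrapping the matched values, assuming the server returns them as text content (how matched nodes are serialized is implementation-defined):

```javascript
// Sketch: pull the text items out of an MCP tool result.
// How matches are serialized (JSON, newline-separated, ...) is
// up to the server implementation; this just unwraps the envelope.
const matches = result.content
  .filter((item) => item.type === "text")
  .map((item) => item.text);
console.log(matches); // e.g. the text of each matched <item> node
```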
```javascript
// Get all links from HTML
const result = await callTool("xpath", {
  xml: "<html><body><a href='link1.html'>Link 1</a><a href='link2.html'>Link 2</a></body></html>",
  query: "//a/@href",
  mimeType: "text/html"
});
```
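XPath can filter as well as select. As a hypothetical refinement of the query above, this picks only the href of anchors whose text contains a given string:

```javascript
// Sketch: contains() restricts the match to anchors whose
// text includes "Link 2", so only link2.html is selected.
const filtered = await callTool("xpath", {
  xml: "<html><body><a href='link1.html'>Link 1</a><a href='link2.html'>Link 2</a></body></html>",
  query: "//a[contains(text(), 'Link 2')]/@href",
  mimeType: "text/html"
});
```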
```javascript
// Get all links from a webpage
const result = await callTool("xpathwithurl", {
  url: "https://example.com",
  query: "//a/@href",
  mimeType: "text/html"
});
```
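The snippets above assume a `callTool` helper. One way to provide it (a sketch, not part of this project) is with the official `@modelcontextprotocol/sdk` client, spawning the server over stdio:

```javascript
// Sketch: connect an MCP client to the server and define callTool().
// Assumes the official @modelcontextprotocol/sdk package; the client
// name/version strings here are arbitrary.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const client = new Client({ name: "xpath-example", version: "1.0.0" });
await client.connect(
  new StdioClientTransport({
    command: "npx",
    args: ["@thirdstrandstudio/mcp-xpath"],
  })
);

async function callTool(name, args) {
  return client.callTool({ name, arguments: args });
}
```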
## Development

```bash
# Install dependencies
npm install

# Start the server in development mode
npm start
```
## License

This MCP server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.