Execute XPath queries on XML content.
MCP Server for executing XPath queries on XML content.
xpath
- xml (string): The XML content to query
- query (string): The XPath query to execute
- mimeType (optional, string): The MIME type (e.g. text/xml, application/xml, text/html, application/xhtml+xml)

xpathwithurl
- url (string): The URL to fetch XML/HTML content from
- query (string): The XPath query to execute
- mimeType (optional, string): The MIME type (e.g. text/xml, application/xml, text/html, application/xhtml+xml)
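Under the hood, an MCP client invokes these tools with a JSON-RPC 2.0 tools/call request. The sketch below shows roughly what such a request looks like for the xpath tool; the request variable and field values are illustrative:

// Roughly what an MCP tools/call request for the xpath tool looks like on the wire
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "xpath",
    arguments: {
      xml: "<root><item>value1</item></root>",
      query: "//item/text()",
      mimeType: "text/xml"
    }
  }
};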
To install mcp-xpath for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @thirdstrandstudio/mcp-xpath --client claude
# Install dependencies
npm install
# Build the package
npm run build
Add the following to your claude_desktop_config.json:
{
  "mcpServers": {
    "xpath": {
      "command": "npx",
      "args": [
        "@thirdstrandstudio/mcp-xpath"
      ]
    }
  }
}
Or, to run the server from a local build:
{
  "mcpServers": {
    "xpath": {
      "command": "node",
      "args": [
        "/path/to/mcp-xpath/dist/index.js"
      ]
    }
  }
}
Replace /path/to/mcp-xpath with the actual path to your repository.
// Select all <item> elements from XML
const result = await callTool("xpath", {
xml: "<root><item>value1</item><item>value2</item></root>",
query: "//item/text()",
mimeType: "text/xml"
});
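XPath predicates can narrow a selection further; the variant below uses a positional predicate to pick a single node (the second variable name is illustrative):

// Select only the second <item> element
const second = await callTool("xpath", {
  xml: "<root><item>value1</item><item>value2</item></root>",
  query: "//item[2]/text()",
  mimeType: "text/xml"
});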
// Get all links from HTML
const result = await callTool("xpath", {
xml: "<html><body><a href='link1.html'>Link 1</a><a href='link2.html'>Link 2</a></body></html>",
query: "//a/@href",
mimeType: "text/html"
});
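Attribute values can also act as filters; this sketch selects the text of one specific link rather than every href:

// Get the link text for the anchor whose href is link2.html
const linkText = await callTool("xpath", {
  xml: "<html><body><a href='link1.html'>Link 1</a><a href='link2.html'>Link 2</a></body></html>",
  query: "//a[@href='link2.html']/text()",
  mimeType: "text/html"
});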
// Get all links from a webpage
const result = await callTool("xpathwithurl", {
url: "https://example.com",
query: "//a/@href",
mimeType: "text/html"
});
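Any XPath over the fetched document works the same way; for example, pulling the page title (assuming the URL serves ordinary HTML):

// Get the page title from a live URL
const title = await callTool("xpathwithurl", {
  url: "https://example.com",
  query: "//title/text()",
  mimeType: "text/html"
});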
# Install dependencies
npm install
# Start the server in development mode
npm start
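For interactive debugging, the server can also be run under the MCP Inspector (a sketch assuming the default stdio transport):

# Inspect the built server interactively
npx @modelcontextprotocol/inspector node dist/index.js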
This MCP server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.