Wikipedia Simple English MCP Server
A Model Context Protocol (MCP) server that provides access to Wikipedia content, prioritizing Simple English Wikipedia with intelligent fallback to regular English Wikipedia when content is unavailable or of poor quality.
Overview
This MCP server acts as a bridge between AI assistants and Wikipedia, specifically designed to:
- Prioritize Simple English: Uses Simple English Wikipedia first for clearer, more accessible content
- Intelligent Fallback: Automatically falls back to regular English Wikipedia when needed
- Quality Detection: Evaluates content quality and switches to English Wikipedia for stub articles
- Human-Friendly: Designed with readability and usability in mind
Features
Search Tool
- Search across Wikipedia articles
- Returns structured metadata and ranked list of relevant page titles
- Tries Simple English first, falls back to English
- Configurable result limits
Summary Tool
- Retrieve Wikipedia page summaries/excerpts
- Lightweight, fast responses for quick overviews
- Structured metadata with content length and language indicators
- Automatic disambiguation handling
Content Tool
- Retrieve full Wikipedia page content
- Complete article text for detailed research
- Structured output with separate metadata and content blocks
- Quality-based language selection
Installation & Setup
Prerequisites
- Python 3.12 or higher
- uv package manager
Quick Start
Run directly with uvx:
uvx git+https://github.com/bhubbb/mcp-se-wikipedia
Or install locally:
1. Clone or download this project
2. Install dependencies:
cd mcp-se-wikipedia
uv sync
3. Run the server:
uv run python main.py
Integration with AI Assistants
This MCP server is designed to work with AI assistants that support the Model Context Protocol. Configure your AI assistant to connect to this server via stdio.
Example configuration for Claude Desktop:
{
"mcpServers": {
"wikipedia-se": {
"command": "uvx",
"args": ["git+https://github.com/bhubbb/mcp-se-wikipedia"]
}
}
}
Or if using a local installation:
{
"mcpServers": {
"wikipedia-se": {
"command": "uv",
"args": ["run", "python", "/path/to/mcp-se-wikipedia/main.py"]
}
}
}
Available Tools
search
Search Wikipedia for articles with structured results.
Parameters:
- query (required): Search terms
- limit (optional): Max results to return (1-20, default: 10)
Returns:
- Search metadata (Wikipedia version, query, results count, language code)
- List of matching article titles
Example:
{
"name": "search",
"arguments": {
"query": "solar system",
"limit": 5
}
}
summary
Get the summary/excerpt of a Wikipedia page.
Parameters:
- title (required): Page title to retrieve the summary for
- auto_suggest (optional): Auto-suggest similar titles if no exact match is found (default: true)
Returns:
- Summary metadata (title, Wikipedia version, language code, URL, content length)
- Page summary text
Example:
{
"name": "summary",
"arguments": {
"title": "Earth",
"auto_suggest": true
}
}
content
Get the full content of a Wikipedia page.
Parameters:
- title (required): Page title to retrieve the full content for
- auto_suggest (optional): Auto-suggest similar titles if no exact match is found (default: true)
Returns:
- Content metadata (title, Wikipedia version, language code, URL, content length)
- Complete article content
Example:
{
"name": "content",
"arguments": {
"title": "Earth",
"auto_suggest": true
}
}
How It Works
Language Priority System
- Simple English First: All requests start with Simple English Wikipedia
- Quality Check: For content/summary requests, checks if content is substantial (>500 characters)
- Automatic Fallback: Switches to English Wikipedia if:
- Page doesn't exist in Simple English
- Content is too brief (likely a stub)
- Search returns no results
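The fallback described above can be sketched with the underlying wikipedia library roughly as follows. This is a simplified sketch; the actual logic lives in main.py, and the constant name QUALITY_THRESHOLD here is illustrative, not the real name:
import wikipedia

QUALITY_THRESHOLD = 500  # illustrative name: minimum characters before accepting Simple English

def fetch_page(title: str, auto_suggest: bool = True):
    """Fetch a page, preferring Simple English and falling back to English Wikipedia."""
    wikipedia.set_lang("simple")
    try:
        page = wikipedia.page(title, auto_suggest=auto_suggest)
        if len(page.content) >= QUALITY_THRESHOLD:
            return page, "simple"
    except wikipedia.exceptions.PageError:
        pass  # not available in Simple English; fall through to English
    wikipedia.set_lang("en")
    return wikipedia.page(title, auto_suggest=auto_suggest), "en"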
Structured Output Format
All tools return structured responses with separate blocks for:
- Metadata: Wikipedia version, language codes, URLs, content statistics
- Content: Search results, summaries, or full article content
- Options: Disambiguation choices when multiple pages match
This structure makes responses both human-readable and machine-parseable.
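For instance, a summary response is delivered as separate text blocks along these lines (the exact labels and fields are defined in main.py; the values below are placeholders):
Metadata:
  Title: Earth
  Wikipedia: Simple English (simple)
  URL: https://simple.wikipedia.org/wiki/Earth
  Content length: ... characters

Content:
  Earth is the third planet from the Sun. ...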
Error Handling
- Disambiguation: When multiple pages match, returns list of options
- Page Not Found: Clear error messages with suggestions
- Network Issues: Graceful error handling with informative messages
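Disambiguation comes from the wikipedia library's DisambiguationError, whose options attribute lists the candidate page titles. A handler along these lines (a sketch, not necessarily the exact code in main.py) turns that into the option list returned by the tools:
import wikipedia

def safe_summary(title: str) -> str:
    """Return a page summary, or a readable list of options when the title is ambiguous."""
    try:
        return wikipedia.summary(title, auto_suggest=True)
    except wikipedia.exceptions.DisambiguationError as e:
        options = "\n".join(f"- {option}" for option in e.options[:10])
        return f"'{title}' may refer to several pages:\n{options}"
    except wikipedia.exceptions.PageError:
        return f"No Wikipedia page found for '{title}'."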
Use Cases
For Students & Learners
- Get simplified explanations of complex topics
- Access educational content in clearer language
- Research assistance with automatic complexity adjustment
For Content Creation
- Source material for articles and explanations
- Fact-checking with reliable Wikipedia sources
- Research support with readability optimization
For Accessibility
- Easier-to-read content for various reading levels
- Simplified language for non-native speakers
- Clear, concise information delivery
Examples
Basic Search
Ask your AI assistant: "Search Wikipedia for information about photosynthesis"
The server will:
- Search Simple English Wikipedia for "photosynthesis"
- Return structured metadata (version, language code, result count)
- Provide a list of relevant articles
- Fall back to English Wikipedia if needed
Summary Retrieval
Ask your AI assistant: "Get a summary of the Moon from Wikipedia"
The server will:
- Try to fetch "Moon" summary from Simple English Wikipedia
- Return structured metadata (title, version, URL, content length)
- Provide the page summary
- Fall back to English Wikipedia if the Simple English page is unavailable
Full Content Retrieval
Ask your AI assistant: "Get the full Wikipedia article about the Moon"
The server will:
- Try to fetch complete "Moon" article from Simple English Wikipedia
- Check if the content is substantial enough
- Return structured metadata and full article content
- Fall back to English Wikipedia if needed
Disambiguation Handling
Ask your AI assistant: "Get information about Mercury"
If multiple pages match (planet Mercury, element Mercury, etc.), the server will return:
- Structured disambiguation metadata
- A list of options to choose from
Development
Project Structure
mcp-se-wikipedia/
├── main.py          # Main MCP server implementation
├── pyproject.toml   # Project dependencies and metadata
├── AGENT.md         # This documentation
└── .venv/           # Virtual environment (created by uv)
Key Dependencies
- mcp: Model Context Protocol framework
- wikipedia: Python Wikipedia API wrapper
- asyncio: Async/await support for MCP
Customization
The server can be easily customized by modifying main.py:
- Quality Threshold: Change the 500-character threshold for content quality
- Search Limits: Modify default and maximum search result limits
- Language Settings: Add support for other Wikipedia languages
- Content Filtering: Add custom content processing or filtering
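In practice, the knobs above amount to a handful of values near the top of the file. The constant names below are illustrative only; check main.py for the real ones:
# Illustrative constant names -- check main.py for the actual identifiers.
QUALITY_THRESHOLD = 1000            # require longer articles before accepting Simple English
DEFAULT_SEARCH_LIMIT = 10           # default number of search results
MAX_SEARCH_LIMIT = 20               # upper bound accepted by the search tool
LANGUAGE_CHAIN = ["simple", "en"]   # Wikipedia language editions to try, in order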
Testing the Server
Test the server directly:
uvx git+https://github.com/bhubbb/mcp-se-wikipedia
Or with local installation:
cd mcp-se-wikipedia
uv run python main.py
The server communicates via JSON-RPC over stdin/stdout, so you'll need an MCP client or AI assistant to interact with it properly.
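As a rough illustration of the wire format, an MCP tools/list request is a single JSON-RPC message on stdin such as the one below. A compliant client performs an initialize handshake before this, so piping a lone message is only useful as a smoke test:
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list",
  "params": {}
}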
Troubleshooting
Common Issues
- Import Errors: Make sure all dependencies are installed with uv sync
- Connection Issues: Check network connectivity for Wikipedia API access
- Rate Limiting: Wikipedia has rate limits; the server handles this gracefully
Debugging
Enable debug logging by modifying the logging level in main.py:
logging.basicConfig(level=logging.DEBUG)
Contributing
This is a simple, single-file implementation designed for clarity and ease of modification. Feel free to:
- Add new Wikipedia language support
- Implement additional content filtering
- Add caching for better performance
- Extend with additional Wikipedia features
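As a starting point for caching, a minimal in-memory cache around summary lookups could look like this (a sketch of the idea; nothing like this exists in main.py today):
from functools import lru_cache

import wikipedia

@lru_cache(maxsize=256)
def cached_summary(title: str, lang: str = "simple") -> str:
    """Cache summaries per (title, language) to avoid repeated Wikipedia API calls."""
    wikipedia.set_lang(lang)
    return wikipedia.summary(title, auto_suggest=True)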
License
This project uses the same license as its dependencies. The Wikipedia content accessed through this server is subject to Wikipedia's licensing terms.