Google Scholar MCP
An MCP (Model Context Protocol) server for searching Google Scholar, built for AI assistants and automation workflows that need papers, authors, citations, and BibTeX entries.
Features
- Paper Search: Query Google Scholar by keyword with filtering, sorting, and pagination
- Author Lookup: Find researcher profiles with publication lists and h-index metrics
- Citation Tracking: Retrieve papers that cite a given work
- Paper Details: Get full metadata, citations-per-year graphs, and public access info
- BibTeX Export: Generate citation entries in BibTeX format
- Bulk Search: Batch search multiple queries with automatic rate limiting
- Rate Limiting: Built-in delays between requests to avoid being blocked
- Proxy Support: Optional proxy configuration (free, single, or ScraperAPI)
Installation
Requirements
- Python 3.11 or later
- Dependencies: mcp[cli]>=1.4.0, scholarly>=1.7.11, pydantic>=2.0 (see pyproject.toml)
- The project uses uv for dependency management
Install it from PyPI
pip install google-scholar-search-mcp
Build from Source
git clone https://github.com/LWaetzig/google-scholar-mcp.git
cd google-scholar-mcp
pip install -e .
Note: This server uses the scholarly library to access Google Scholar. Respect Google's Terms of Service and use rate limiting appropriately to avoid being blocked.
Configuration
Configure the MCP server via environment variables:
| Variable | Default | Description |
|---|---|---|
| GS_MIN_DELAY | 5.0 | Minimum seconds between requests |
| GS_MAX_DELAY | 15.0 | Maximum seconds between requests |
| GS_MAX_RETRIES | 3 | Number of retries on failure |
| GS_PROXY_TYPE | none | Proxy mode: none, free, single, scraperapi |
| GS_PROXY_HTTP | — | HTTP proxy URL (for single mode) |
| GS_PROXY_HTTPS | — | HTTPS proxy URL (for single mode) |
| GS_SCRAPERAPI_KEY | — | ScraperAPI key (for scraperapi mode) |
| GS_TIMEOUT | 30 | Request timeout in seconds |
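The way these variables are likely consumed can be sketched as follows. This is an illustrative model only: the `ScholarConfig` dataclass and `load_config` function are hypothetical names, not the server's actual internals, though the defaults match the table above.

```python
import os
from dataclasses import dataclass


@dataclass
class ScholarConfig:
    min_delay: float
    max_delay: float
    max_retries: int
    proxy_type: str
    timeout: int


def load_config() -> ScholarConfig:
    # Each variable falls back to the documented default when unset.
    return ScholarConfig(
        min_delay=float(os.environ.get("GS_MIN_DELAY", "5.0")),
        max_delay=float(os.environ.get("GS_MAX_DELAY", "15.0")),
        max_retries=int(os.environ.get("GS_MAX_RETRIES", "3")),
        proxy_type=os.environ.get("GS_PROXY_TYPE", "none"),
        timeout=int(os.environ.get("GS_TIMEOUT", "30")),
    )


config = load_config()
print(config.min_delay, config.proxy_type)
```

Reading every value through `os.environ.get` with a string default keeps the parsing uniform and makes the documented defaults the single source of truth.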
Proxy Configuration Examples
No Proxy (Default)
export GS_PROXY_TYPE=none
Free Proxy
export GS_PROXY_TYPE=free
Single Proxy
export GS_PROXY_TYPE=single
export GS_PROXY_HTTP=http://proxy.example.com:8080
export GS_PROXY_HTTPS=https://proxy.example.com:8080
ScraperAPI
export GS_PROXY_TYPE=scraperapi
export GS_SCRAPERAPI_KEY=your_key_here
Usage
Detailed documentation for each individual tool can be found here
Integration with Claude Desktop
Add the server to your Claude Desktop configuration:
| Platform | Path |
|---|---|
| macOS | ~/Library/Application Support/Claude/claude_desktop_config.json |
| Windows | %APPDATA%\Claude\claude_desktop_config.json |
Add the google_scholar_mcp entry under mcpServers, replacing the path with the absolute path to your clone:
{
"mcpServers": {
"google-scholar": {
"command": "python",
"args": ["-m", "google_scholar_mcp.server"],
"env": {
"GS_MIN_DELAY": "5.0",
"GS_MAX_DELAY": "15.0",
"GS_PROXY_TYPE": "none"
}
}
}
}
After updating the config, restart Claude Desktop. The Google Scholar tools will appear in the MCP Tools panel.
Integration with Other MCP Clients
Any MCP client (e.g., Cline, Continue, or custom tools) can use this server. Configure the connection to:
Command: python -m google_scholar_mcp.server
Transport: stdio
Rate Limiting
The server automatically enforces rate limiting between requests to avoid overloading Google Scholar's servers:
- Min Delay (default 5s): Minimum wait between consecutive requests
- Max Delay (default 15s): Maximum wait (randomized to avoid patterns)
- Max Retries (default 3): Retry failed requests up to this many times
These settings help prevent being blocked by Google Scholar. Adjust via environment variables if needed:
export GS_MIN_DELAY=3.0
export GS_MAX_DELAY=10.0
export GS_MAX_RETRIES=5
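The delay-and-retry policy described above can be sketched like this. It is an illustrative model of the behavior (randomized wait inside the configured window, plus bounded retries), not the server's actual implementation:

```python
import random
import time


def rate_limited_delay(min_delay: float = 5.0, max_delay: float = 15.0) -> float:
    # Pick a randomized wait inside the configured window so request
    # timing does not follow a detectable fixed pattern.
    return random.uniform(min_delay, max_delay)


def with_retries(fetch, max_retries: int = 3,
                 min_delay: float = 5.0, max_delay: float = 15.0):
    # Attempt the request up to max_retries times, sleeping a
    # randomized delay before each retry.
    last_error = None
    for _ in range(max_retries):
        try:
            return fetch()
        except Exception as exc:
            last_error = exc
            time.sleep(rate_limited_delay(min_delay, max_delay))
    raise last_error
```

Randomizing within a [min, max] window, rather than using a fixed interval, is a common tactic against pattern-based bot detection.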
⚠️ IP Blocking Warning
If you exceed Google Scholar's rate limits despite the rate limiter:
- Your IP may be temporarily blocked (usually 24-48 hours)
- All requests will fail with connection errors or 429 responses
- Requests from a blocked IP will keep failing, even through proxies on the same IP range
- Repeated violations may trigger permanent blocks or require CAPTCHA solving
Recommended Practices:
- Never decrease delays below 5 seconds — the defaults are tuned for reliability
- Use the bulk_search tool instead of rapid sequential searches — it includes built-in delays
- Add extra buffer during bulk operations — consider setting GS_MIN_DELAY=10.0 for large jobs
- Use a proxy service (free proxy or ScraperAPI) to distribute requests across multiple IPs
- Monitor for 429 errors — if you see them, increase delays immediately and wait before retrying
- Spread requests over time — don't run 100 queries in 5 minutes, even with delays
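To see why spreading requests over time matters, here is a small helper that estimates how much wall-clock time the delays alone add to a batch. The function is an illustrative calculation, not part of the server:

```python
def estimate_batch_time(num_queries: int,
                        min_delay: float,
                        max_delay: float) -> tuple[float, float]:
    # Between num_queries requests there are num_queries - 1 waits;
    # the bounds assume every wait hits the min or the max delay.
    waits = max(num_queries - 1, 0)
    return (waits * min_delay, waits * max_delay)


low, high = estimate_batch_time(100, 10.0, 20.0)
print(f"100 queries: {low / 60:.1f}-{high / 60:.1f} minutes of delay alone")
```

With the bulk-operation settings from the table below (10-20 s), 100 queries already spend roughly 16-33 minutes just waiting, which is the point: a large batch is inherently a long-running job.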
Recovery from IP Blocks
If your IP gets blocked:
- Wait 24-48 hours for the temporary block to expire
- Use a proxy — enable GS_PROXY_TYPE=free or GS_PROXY_TYPE=scraperapi to route through different IPs
- Change your network — use a different WiFi/ISP temporarily if possible
- Contact support — for persistent blocks, escalate to Google Scholar support
Choosing Appropriate Delays
| Scenario | GS_MIN_DELAY | GS_MAX_DELAY | Notes |
|---|---|---|---|
| Single searches | 5.0 | 15.0 | Default; safe for occasional queries |
| Bulk operations | 10.0 | 20.0 | Use for batch jobs; prevents rapid-fire requests |
| Heavy load | 15.0 | 30.0 | Use with proxy for large-scale research |
| Aggressive ⚠️ | <5.0 | <10.0 | Not recommended; high risk of IP blocking |
Troubleshooting
"Error: 429 Too Many Requests"
You've hit Google Scholar's rate limit. Solutions:
- Increase delays: Set higher GS_MIN_DELAY and GS_MAX_DELAY
- Use a proxy: Set GS_PROXY_TYPE=free or use ScraperAPI
- Wait and retry: Google Scholar may be temporarily blocking; try again later
"No results found"
- Check your query syntax (Google Scholar supports advanced search operators)
- Ensure the author/paper name is spelled correctly
- Try a simpler query with fewer keywords
"Connection timeout"
- Increase GS_TIMEOUT if your network is slow
- Check your internet connection
- Verify proxy settings if using a proxy
Contributing
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch (git checkout -b feature/your-feature)
- Commit your changes with clear messages
- Push to your fork
- Open a pull request
Support
For issues, questions, or feature requests, please open an issue on GitHub.
License