# Leporello

Remote MCP for opera & classical music event schedules. Also available as a web app at leporello.app.
## Supported Venues

Missing your favorite venue? PRs welcome! See CONTRIBUTING.md for step-by-step instructions on adding a new venue scraper.
| Venue | City |
|---|---|
| Staatsoper Stuttgart | Stuttgart |
| Stuttgarter Philharmoniker | Stuttgart |
| Wiener Staatsoper | Vienna |
| Metropolitan Opera | New York |
| Oper Frankfurt | Frankfurt |
| San Francisco Opera | San Francisco |
| Gran Teatre del Liceu | Barcelona |
| Semperoper | Dresden |
| Opéra National de Paris | Paris |
| Carnegie Hall | New York |
| Teatro Real | Madrid |
| Staatsoper Unter den Linden | Berlin |
| Sydney Opera House | Sydney |
| Philharmonie de Paris | Paris |
| Bayerische Staatsoper | Munich |
## How it works
Venue-specific scrapers fetch schedule pages (HTML or JSON) from opera houses and concert halls, parse them with Cheerio, and store the events in a local SQLite database. A node-cron scheduler re-scrapes all venues daily at 03:00 UTC, replacing each venue's events with the latest data. The server exposes the data via a Model Context Protocol (MCP) endpoint, so any MCP-compatible AI assistant can query upcoming performances. A static Astro frontend reads the same database at build time and serves a filterable event listing at leporello.app.
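The daily re-scrape replaces each venue's events wholesale rather than merging, so cancelled or rescheduled performances don't linger. A minimal in-memory sketch of that step (`replaceVenueEvents` and the `db` shape are illustrative, not the actual code):

```javascript
// Stand-in for the SQLite events table.
const db = { events: [] };

function replaceVenueEvents(venueId, scraped) {
  // Same effect as running, in one transaction:
  //   DELETE FROM events WHERE venue_id = ?;
  //   INSERT INTO events (venue_id, title, date) VALUES (?, ?, ?);
  db.events = db.events.filter((e) => e.venueId !== venueId);
  for (const e of scraped) db.events.push({ venueId, ...e });
}

// A later scrape replaces the earlier snapshot for the same venue.
replaceVenueEvents('wiener-staatsoper', [{ title: 'Don Giovanni', date: '2025-06-01' }]);
replaceVenueEvents('wiener-staatsoper', [{ title: 'Tosca', date: '2025-06-02' }]);
```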
## MCP Tools

| Tool | Description |
|---|---|
| `list_countries` | All countries with city/venue counts |
| `list_cities` | All cities with venues, optionally filtered by country |
| `list_venues` | All venues, optionally filtered by country or city |
| `list_events` | Upcoming events filtered by country, city, or venue |
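An MCP client invokes these through the protocol's standard `tools/call` request. A sketch of such a call (the `city` argument name is illustrative, taken from the filters listed above):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "list_events",
    "arguments": { "city": "Vienna" }
  }
}
```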
## Privacy
The hosted instance at leporello.app logs aggregate MCP usage (tool name, arguments, response time, User-Agent, and a daily-rotating salted hash of the client IP) so I can see what's working and what isn't. No raw IPs, no personal data, no auth tokens.
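A daily-rotating salted hash could be implemented along these lines (an illustrative sketch, not the actual logging code): one random salt per UTC day, so hashes of the same IP match within a day but cannot be correlated across days or reversed to the raw IP.

```javascript
import { createHash, randomBytes } from 'node:crypto';

// One random salt per UTC day; old salts are never persisted in the sketch.
const salts = new Map();
function saltFor(day) {
  if (!salts.has(day)) salts.set(day, randomBytes(16).toString('hex'));
  return salts.get(day);
}

// Hash the client IP together with today's salt.
function hashClientIp(ip, day = new Date().toISOString().slice(0, 10)) {
  return createHash('sha256').update(`${saltFor(day)}:${ip}`).digest('hex');
}
```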
Run locally
npm install
npm test # run scraper tests
npm run scrape # one-off scrape all venues
#or
npm run scrape -- wiener-staatsoper # scrape a single venue
npm run dev # start server on http://localhost:3000
# or
npm run dev:fresh # scrape, then serve
The scheduler re-scrapes daily at 03:00 UTC. On first use, run `npm run scrape` to populate the database.
## Docker
Two containers: web (HTTP server + MCP + static frontend) and scraper (fetches venue data into SQLite). They share a data volume. The scraper runs once and exits — schedule it with a host cron job.
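The two-container layout might look like the following compose file. This is a sketch: the service names match the commands below, but the build context, entrypoints, paths, and port are assumptions.

```yaml
services:
  web:
    build: .
    command: node dist/server.js    # HTTP server + MCP endpoint + static frontend
    ports:
      - "3000:3000"
    volumes:
      - data:/app/data              # shared SQLite database

  scraper:
    build: .
    command: node dist/scrape.js    # runs once and exits
    volumes:
      - data:/app/data
    profiles: ["scrape"]            # excluded from `docker compose up`; run on demand

volumes:
  data:
```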
```sh
# Start the web server
docker compose up -d

# Run all scrapers (one-off; the container stops when done)
docker compose run --rm scraper

# Scrape a single venue
docker compose run --rm scraper node dist/scrape.js wiener-staatsoper

# Rebuild the Astro frontend (e.g. after a manual scrape).
# --input-type=module is needed because the snippet uses ESM imports
# and top-level await.
docker compose exec web node --input-type=module -e "
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';
await promisify(execFile)('npm', ['run', 'build', '--prefix', 'web'], { cwd: '/app', timeout: 60000 });
console.log('done');
"

# Tail logs
docker compose logs -f web
```
The web container rebuilds the Astro frontend on every start. To run the scrapers on a schedule, add a host cron job that scrapes and then restarts `web`:

```
0 3 * * * cd /home/phil/docker/lep && docker compose run --rm scraper && docker compose restart web
```