Leporello

Remote MCP for Opera & Classical Music Event Schedules

Opera & classical music event schedule remote MCP.
Also available as a web app at leporello.app.

Supported Venues

Missing your favorite venue? PRs welcome! Read CONTRIBUTING.md for step-by-step instructions on adding a new venue scraper.

| Venue | City |
| --- | --- |
| Staatsoper Stuttgart | Stuttgart |
| Stuttgarter Philharmoniker | Stuttgart |
| Wiener Staatsoper | Vienna |
| Metropolitan Opera | New York |
| Oper Frankfurt | Frankfurt |
| San Francisco Opera | San Francisco |
| Gran Teatre del Liceu | Barcelona |
| Semperoper | Dresden |
| Opéra National de Paris | Paris |
| Carnegie Hall | New York |
| Teatro Real | Madrid |
| Staatsoper Unter den Linden | Berlin |
| Sydney Opera House | Sydney |
| Philharmonie de Paris | Paris |
| Bayerische Staatsoper | München |

How it works

Venue-specific scrapers fetch schedule pages (HTML or JSON) from opera houses and concert halls, parse them with Cheerio, and store the events in a local SQLite database. A node-cron scheduler re-scrapes all venues daily at 03:00 UTC, replacing each venue's events with the latest data. The server exposes the data via a Model Context Protocol (MCP) endpoint, so any MCP-compatible AI assistant can query upcoming performances. A static Astro frontend reads the same database at build time and serves a filterable event listing at leporello.app.
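The normalization step can be sketched roughly like this — a minimal, hypothetical example of turning a venue's raw JSON schedule into the event rows stored in SQLite. Field names and the schema are illustrative, not the project's actual code; HTML venues would go through Cheerio instead of `JSON.parse`.

```javascript
// Hypothetical sketch: normalize a venue's raw schedule items into
// flat event rows for SQLite. Field names are illustrative only.
function normalizeEvents(venueId, rawItems) {
  return rawItems
    .filter((item) => item.date && item.title) // skip incomplete entries
    .map((item) => ({
      venue: venueId,
      title: item.title.trim(),
      date: new Date(item.date).toISOString().slice(0, 10), // YYYY-MM-DD
      url: item.link ?? null,
    }));
}

const rows = normalizeEvents("wiener-staatsoper", [
  { title: " Don Giovanni ", date: "2025-06-01T19:00:00Z", link: "https://example.org/dg" },
  { title: "Tosca" }, // no date: dropped
]);
console.log(rows);
```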

MCP Tools

| Tool | Description |
| --- | --- |
| list_countries | All countries with city/venue counts |
| list_cities | All cities with venues, optionally filtered by country |
| list_venues | All venues, optionally filtered by country or city |
| list_events | Upcoming events filtered by country, city, or venue |
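For reference, an MCP client invokes these tools via a JSON-RPC `tools/call` request. The sketch below shows the general request shape; the argument names follow the tool descriptions above, but the server's exact input schema may differ.

```javascript
// General shape of an MCP tools/call request (JSON-RPC 2.0).
// Argument names here are assumptions based on the table above.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "list_events",
    arguments: { city: "Vienna" },
  },
};
console.log(JSON.stringify(request));
```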

Privacy

The hosted instance at leporello.app logs aggregate MCP usage (tool name, arguments, response time, User-Agent, and a daily-rotating salted hash of the client IP) so I can see what's working and what isn't. No raw IPs, no personal data, no auth tokens.

Run locally

npm install
npm test          # run scraper tests

npm run scrape                       # one-off scrape all venues
# or
npm run scrape -- wiener-staatsoper  # scrape a single venue

npm run dev       # start server on http://localhost:3000
# or
npm run dev:fresh # scrape, then serve

The server re-scrapes all venues daily at 03:00 UTC. Run npm run scrape once to populate the database before first use.

Docker

Two containers: web (HTTP server + MCP + static frontend) and scraper (fetches venue data into SQLite). They share a data volume. The scraper runs once and exits — schedule it with a host cron job.
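The two-service layout described above might look roughly like this in a compose file — a sketch only, with assumed service names, paths, and entrypoints, not the repo's actual docker-compose.yml:

```yaml
# Illustrative sketch: service/volume names and paths are assumptions.
services:
  web:
    build: .
    command: node dist/server.js
    ports:
      - "3000:3000"
    volumes:
      - data:/app/data        # shared SQLite database
  scraper:
    build: .
    command: node dist/scrape.js
    volumes:
      - data:/app/data
    profiles: ["tools"]       # skipped by `docker compose up`; run explicitly
volumes:
  data:
```

Putting the scraper behind a profile keeps `docker compose up -d` from starting it, while `docker compose run --rm scraper` still works because explicitly targeted services activate their own profiles.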

# Start the web server
docker compose up -d

# Run all scrapers (one-off, container stops when done)
docker compose run --rm scraper

# Scrape a single venue
docker compose run --rm scraper node dist/scrape.js wiener-staatsoper

# Rebuild the Astro frontend (e.g. after a manual scrape)
docker compose exec web node --input-type=module -e "
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';
await promisify(execFile)('npm', ['run', 'build', '--prefix', 'web'], { cwd: '/app', timeout: 60000 });
console.log('done');
"

# Tail logs
docker compose logs -f web

The web container rebuilds the Astro frontend on every start. To run scrapers on a schedule, add a host cron job that scrapes then restarts web:

0 3 * * * cd /home/phil/docker/lep && docker compose run --rm scraper && docker compose restart web
