Speak AI MCP

Connect Claude, ChatGPT, and other AI assistants to your Speak AI workspace. Transcribe meetings, analyze media, and extract insights, all through natural conversation.


Connect Speak AI to Claude or ChatGPT in 60 seconds

For researchers, revenue teams, meeting-heavy teams, and media workflows.
No Terminal. No npm. No JSON config files.

Installation guide at mcp.speakai.co →



What this does

Speak AI transcribes your interviews, sales calls, research sessions, webinars, podcasts, and team meetings — then extracts AI insights like summaries, action items, sentiment, and themes.

This connector (built on MCP — the standard way Claude and ChatGPT connect to apps) brings all of that into Claude or ChatGPT. Once installed, you can ask:

  • "Find the last 10 customer interviews that mention pricing, group the feedback by theme, and cite the source recordings."
  • "Summarize this week's team meetings into decisions, action items, owners, and unresolved risks."
  • "Pull exact customer quotes about onboarding friction from recent research calls and format them for a product brief."
  • "Find a strong 30-second highlight from the latest webinar, create a clip, and export captions."

The AI does the searching, summarizing, and citing. Your recordings stay in your Speak AI workspace — Claude and ChatGPT just query them through this connector.


Install (pick your tool)

Two paths to install — pick whichever feels easier. The one-click connect path approves access via a permission popup; the manual path pastes an API key into a header.

Don't know which one to pick? If you already use Claude or ChatGPT, install for whichever one you have.

Speak AI's connector address (paste this into your AI tool's connector settings — it's the same idea as pasting a Zoom link into your calendar): https://api.speakai.co/v1/mcp

Claude.ai (web)

  1. Open claude.ai/settings/connectors
  2. Click Add custom connector
  3. Name it Speak AI and paste https://api.speakai.co/v1/mcp, then click Add
  4. A permission popup asks you to log into Speak AI and click Allow
  5. Done — Speak AI appears in your connector list with its tools ready to use. Open a new chat and ask about your recordings.
What each step looks like (screenshots)

3. Add custom connector dialog — name and URL filled in.

Claude add custom connector dialog

5. Connected — Speak AI tools appear in your connector list.

Speak AI connected in Claude

Developer alternative — manual setup with an API key

Get a key at app.speakai.co/developers/apikeys, then in step 3 expand Advanced settings and add Authorization = Bearer <your-key> before clicking Add.

Claude Desktop

  1. Open Claude Desktop → Settings → Connectors → Add custom connector
  2. Paste https://api.speakai.co/v1/mcp
  3. Click Add — a permission popup opens. Sign in to Speak AI and click Allow on the screen that appears.
  4. Done.
Developer alternative — manual setup with an API key

Get a key at app.speakai.co/developers/apikeys, then in step 2 also expand Custom headers and add:

  • Header name: Authorization
  • Header value: Bearer <your-speak-api-key>

Then click Add.

ChatGPT

  1. Open ChatGPT → Settings → Apps & Connectors → Advanced
  2. Turn on Developer Mode (required while Speak AI isn't yet listed in ChatGPT's app store — this lets you add it as a custom app)
  3. Back on Apps & Connectors, click Create and paste https://api.speakai.co/v1/mcp
  4. For Authentication, choose OAuth
  5. ChatGPT opens a new tab to Speak AI — sign in (or click Confirm if already logged in) to authorize. You'll be redirected back; close the tab and return to ChatGPT.
  6. Per-chat: open a chat, click the + / connector menu, and enable Speak AI for that chat.
What each step looks like (screenshots)

1. Connect screen in ChatGPT — paste the connector URL and pick OAuth.

ChatGPT connect screen

2. Confirm and continue — ChatGPT asks you to continue to Speak AI.

ChatGPT continue to Speak

3. Authorize on Speak AI — sign in or click Confirm if you're already signed in.

Speak AI authorization screen

4. Connected — Speak AI now shows in your ChatGPT connector list.

Speak AI connected in ChatGPT

Trouble connecting?

A few things we've seen during early access:

  • Authorization tab doesn't show a "you're connected" page — if you land on the plain Speak AI dashboard with no confirmation, the authorization still went through. Close that tab and return to ChatGPT.
  • "Connect" button keeps reopening the dashboard — fully close and reopen ChatGPT, then check Settings → Apps & Connectors. Speak AI should already be listed there.
  • "No actions available" inside a chat — make sure Developer Mode is still on, and that you've enabled Speak AI from the per-chat connector menu (step 6 above).

Still stuck? Email [email protected].

Claude Code (terminal)

Recommended — install from the official Claude Code plugin marketplace:

  1. Add the official marketplace (one-time): /plugin marketplace add claude-plugins-official
  2. Install the plugin: /plugin install speakai@claude-plugins-official
  3. Activate it: /reload-plugins
  4. Run the getting-started skill and paste your Speak AI API key. Generate one at app.speakai.co/developers/apikeys.

If /plugin install doesn't find Speak AI, refresh the local catalog with /plugin marketplace update claude-plugins-official and retry.

Developer alternative — manual HTTP transport

Skip the plugin and add the connector directly:

claude mcp add speakai --transport http --url https://api.speakai.co/v1/mcp

Claude Code will open an OAuth window for sign-in. To bypass OAuth and pass a Bearer token instead:

claude mcp add speakai --transport http --url https://api.speakai.co/v1/mcp \
  --header "Authorization: Bearer $SPEAKAI_KEY"

Set SPEAKAI_KEY in your shell first, or paste your key inline. Generate a key at app.speakai.co/developers/apikeys.

Cursor

Add to Cursor

Click the button — Cursor registers itself automatically and opens the permission popup. Sign in to Speak AI and click Allow.

Developer alternative — manual setup with an API key

Use the manual stdio setup in the Developer reference at the bottom of this README.

VS Code

Add to VS Code

Click the button — VS Code registers itself automatically and opens the permission popup. Sign in to Speak AI and click Allow.

Developer alternative — manual setup with an API key

Use the manual stdio setup in the Developer reference at the bottom of this README.

OpenClaw / ClawHub

Speak AI is published as a skill on ClawHub for OpenClaw-compatible agents.

  1. Visit the Speak AI skill page on ClawHub
  2. Follow the install instructions for your agent — e.g. clawhub install speakai from the ClawHub CLI
  3. Set your SPEAK_API_KEY environment variable. Generate one at app.speakai.co/developers/apikeys.

ChatGPT (API / Responses)

For developers calling the Responses API directly. Pass the bearer token in the tool config:

{
  "tools": [
    {
      "type": "mcp",
      "server_url": "https://api.speakai.co/v1/mcp",
      "authorization": "Bearer YOUR_SPEAK_API_KEY"
    }
  ]
}

Get a key at app.speakai.co/developers/apikeys.
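For reference, the same tool configuration can be assembled programmatically before sending the request. This is a sketch only: the model name and prompt are placeholders, and the body is shown as a plain payload rather than a call to any specific SDK.

```python
import json

# Sketch: assemble a Responses API payload with the Speak AI MCP tool
# attached. The model name and prompt are placeholders, not values
# required by Speak AI.
def build_request(api_key: str, prompt: str, model: str = "gpt-4.1") -> dict:
    return {
        "model": model,
        "input": prompt,
        "tools": [
            {
                "type": "mcp",
                "server_url": "https://api.speakai.co/v1/mcp",
                "authorization": f"Bearer {api_key}",
            }
        ],
    }

body = build_request("YOUR_SPEAK_API_KEY", "Summarize this week's meetings")
print(json.dumps(body, indent=2))
```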


Privacy & data

When you click Allow on the permission popup (or paste your Speak AI API key into Claude or ChatGPT), you're authorizing that AI assistant to read and modify your Speak AI workspace on your behalf — including media files, transcripts, and AI insights.

  • Your recordings stay in your Speak AI workspace. They are not copied or stored by Anthropic or OpenAI.
  • Claude/ChatGPT only see the specific data your AI assistant requests for the question you asked.
  • You can disconnect at any time by removing the connector inside Claude/ChatGPT, revoking the connection at api.speakai.co/v1/oauth/connections, or rotating/revoking your API key at app.speakai.co/developers/apikeys.

For questions about data handling, see speakai.co/privacy or email [email protected].


Need help connecting?

You shouldn't need to be technical to install this. If anything is confusing or doesn't work, email [email protected].


What you can do once installed

Speak AI ships 83 tools your AI assistant can call. You don't need to memorize them — Claude/ChatGPT pick the right ones based on what you ask. Examples by category:

| Ask | Tools used (auto) |
|---|---|
| "Find customer interviews about pricing and group the feedback by theme" | search_media, ask_magic_prompt |
| "Summarize this week's meetings into decisions, owners, and risks" | list_media, get_media_insights |
| "Pull action items from yesterday's call" | get_media_insights, ask_magic_prompt |
| "Schedule the AI to join my 2pm Zoom" | schedule_meeting_event |
| "Find a 30-second webinar highlight and export captions" | create_clip, export_media |
| "Export the transcript as a PDF and captions as SRT" | export_media |
| "Compare Q1 sales calls against Q2 sales calls and summarize changed objections" | search_media, ask_magic_prompt |

Full tool catalog is in the developer reference below.


Developer reference (CLI, API, advanced setup)

The MCP server lives at https://api.speakai.co/v1/mcp and supports two auth methods:

  1. OAuth 2.1 + Dynamic Client Registration — install by pasting the URL above into any MCP client and approving the consent popup. Discovery, DCR, /authorize + consent, /token, and revocation endpoints are all available.
  2. Bearer token (your Speak AI API key — Authorization: Bearer <key> header). For clients that don't speak OAuth, plus the npm CLI and stdio mode.

Get a Speak AI API key at app.speakai.co/developers/apikeys.

CLI / npm package

The @speakai/mcp-server npm package provides:

  • A CLI (speakai-mcp) for scripting and pipelines (28 commands).
  • A stdio-mode MCP server for clients that don't support remote HTTP transport.
  • An auto-setup wizard that detects installed MCP clients and configures them.
npm install -g @speakai/mcp-server
speakai-mcp init

Manual configuration (stdio mode)

Claude Desktop

Add to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows):

{
  "mcpServers": {
    "speak-ai": {
      "command": "npx",
      "args": ["-y", "@speakai/mcp-server"],
      "env": {
        "SPEAK_API_KEY": "your-api-key"
      }
    }
  }
}
Claude Code
export SPEAK_API_KEY="your-api-key"
claude mcp add speak-ai -- npx -y @speakai/mcp-server
Cursor

Add to ~/.cursor/mcp.json:

{
  "mcpServers": {
    "speak-ai": {
      "command": "npx",
      "args": ["-y", "@speakai/mcp-server"],
      "env": {
        "SPEAK_API_KEY": "your-api-key"
      }
    }
  }
}
Windsurf

Add to ~/.windsurf/mcp.json:

{
  "mcpServers": {
    "speak-ai": {
      "command": "npx",
      "args": ["-y", "@speakai/mcp-server"],
      "env": {
        "SPEAK_API_KEY": "your-api-key"
      }
    }
  }
}
VS Code

Add to ~/.vscode/mcp.json:

{
  "mcpServers": {
    "speak-ai": {
      "command": "npx",
      "args": ["-y", "@speakai/mcp-server"],
      "env": {
        "SPEAK_API_KEY": "your-api-key"
      }
    }
  }
}
Any MCP Client (STDIO)
SPEAK_API_KEY=your-key npx @speakai/mcp-server

Environment variables

| Variable | Required | Default | Description |
|---|---|---|---|
| SPEAK_API_KEY | Yes | -- | Your Speak AI API key |
| SPEAK_ACCESS_TOKEN | No | Auto-managed | JWT access token (auto-fetched and refreshed) |
| SPEAK_BASE_URL | No | https://api.speakai.co | API base URL |

MCP Tools (83)

Media (16 tools)

| Tool | Description |
|---|---|
| get_signed_upload_url | Get a pre-signed S3 URL for direct file upload |
| upload_media | Upload media from a public URL for transcription |
| upload_local_file | Upload a local file directly from disk |
| upload_and_analyze | Upload media and return its media_id immediately. Poll get_media_status until processed, then call get_media_insights for AI summaries. |
| list_media | List and search media files with filters, pagination, and optional inline data (transcripts, speakers, keywords) via include param |
| get_media_insights | Get AI insights — topics, sentiment, summaries, action items |
| get_transcript | Get full transcript with speaker labels and timestamps |
| get_captions | Get subtitle-formatted captions for a media file |
| update_transcript_speakers | Rename speaker labels in a transcript |
| bulk_update_transcript_speakers | Rename speaker labels across multiple media files in one call (max 500) |
| get_media_status | Check processing status (pending → processed) |
| update_media_metadata | Update name, description, tags, or folder |
| delete_media | Permanently delete a media file |
| toggle_media_favorite | Mark or unmark media as a favorite |
| reanalyze_media | Re-run AI analysis with latest models |
| bulk_move_media | Move multiple media files to a folder in one call |

Magic Prompt / AI Chat (12 tools)

| Tool | Description |
|---|---|
| ask_magic_prompt | Ask AI questions about media, folders, or your whole workspace |
| retry_magic_prompt | Retry a failed or incomplete AI response |
| get_chat_history | List recent Magic Prompt conversations |
| get_chat_messages | Get full message history for conversations |
| delete_chat_message | Delete a specific chat message |
| list_prompts | List available AI prompt templates |
| get_favorite_prompts | Get all favorited prompts and answers |
| toggle_prompt_favorite | Mark or unmark a chat message as favorite |
| update_chat_title | Rename a chat conversation |
| submit_chat_feedback | Rate a chat response (thumbs up/down) |
| get_chat_statistics | Get Magic Prompt usage statistics |
| export_chat_answer | Export a conversation or answer |

Folders & Views (11 tools)

| Tool | Description |
|---|---|
| list_folders | List all folders with pagination and sorting |
| get_folder_info | Get folder details and contents |
| create_folder | Create a new folder |
| clone_folder | Duplicate a folder and its contents |
| update_folder | Rename or update a folder |
| delete_folder | Delete a folder (media is preserved) |
| get_all_folder_views | List all saved views across folders |
| get_folder_views | List views for a specific folder |
| create_folder_view | Create a saved view with custom filters |
| update_folder_view | Update a saved view |
| clone_folder_view | Duplicate a view |

Recorder / Survey (10 tools)

| Tool | Description |
|---|---|
| create_recorder | Create a new recorder or survey |
| list_recorders | List all recorders |
| get_recorder_info | Get recorder details and questions |
| clone_recorder | Duplicate a recorder |
| get_recorder_recordings | List all submissions |
| generate_recorder_url | Get a shareable public URL |
| update_recorder_settings | Update branding and permissions |
| update_recorder_questions | Update survey questions |
| check_recorder_status | Check if recorder is active |
| delete_recorder | Delete a recorder |

Automations (5 tools)

| Tool | Description |
|---|---|
| list_automations | List automation rules |
| get_automation | Get automation details |
| create_automation | Create an automation rule |
| update_automation | Update an automation |
| toggle_automation_status | Enable or disable an automation |

Clips (4 tools)

| Tool | Description |
|---|---|
| create_clip | Create a highlight clip from time ranges across media files |
| get_clips | List clips or get a specific clip with download URL |
| update_clip | Update clip title, description, or tags |
| delete_clip | Permanently delete a clip |

Custom Fields (4 tools)

| Tool | Description |
|---|---|
| list_fields | List all custom fields |
| create_field | Create a custom field |
| update_field | Update a custom field |
| update_multiple_fields | Batch update multiple fields |

Webhooks (4 tools)

| Tool | Description |
|---|---|
| create_webhook | Create a webhook for event notifications |
| list_webhooks | List all webhooks |
| update_webhook | Update a webhook |
| delete_webhook | Delete a webhook |

Meeting Assistant (4 tools)

| Tool | Description |
|---|---|
| list_meeting_events | List scheduled and completed events |
| schedule_meeting_event | Schedule AI assistant to join a meeting |
| remove_assistant_from_meeting | Remove assistant from active meeting |
| delete_scheduled_assistant | Cancel a scheduled meeting assistant |

Media Embed (4 tools)

| Tool | Description |
|---|---|
| create_embed | Create an embeddable player widget |
| update_embed | Update embed settings |
| check_embed | Check if embed exists for media |
| get_embed_iframe_url | Get iframe URL for your website |

Text Notes (4 tools)

| Tool | Description |
|---|---|
| create_text_note | Create a text note for AI analysis |
| get_text_insight | Get AI insights for a text note |
| reanalyze_text | Re-run AI analysis on a text note |
| update_text_note | Update note content (triggers re-analysis) |

Exports (2 tools)

| Tool | Description |
|---|---|
| export_media | Export as PDF, DOCX, SRT, VTT, TXT, or CSV |
| export_multiple_media | Batch export with optional merge into one file |

Media Statistics & Languages (2 tools)

| Tool | Description |
|---|---|
| get_media_statistics | Get workspace-level stats — counts, storage, processing breakdown |
| list_supported_languages | List all supported transcription languages |

Search / Analytics (1 tool)

| Tool | Description |
|---|---|
| search_media | Deep search across transcripts, insights, and metadata with filters |

MCP Resources (5)

Resources provide direct data access without tool calls. Clients can read these URIs directly.

| Resource | URI | Description |
|---|---|---|
| Media Library | speakai://media | List of all media files in your workspace |
| Folders | speakai://folders | List of all folders |
| Supported Languages | speakai://languages | Transcription language list |
| Transcript | speakai://media/{mediaId}/transcript | Full transcript for a specific media file |
| Insights | speakai://media/{mediaId}/insights | AI-generated insights for a specific media file |

MCP Prompts (3)

Pre-built workflow prompts that agents can invoke to run multi-step tasks.

analyze-meeting

Upload a recording and get a full analysis — transcript, insights, action items, and key takeaways.

Parameters: url (required), name (optional)

Example: "Use the analyze-meeting prompt with url=https://example.com/standup.mp3"

research-across-media

Search for themes, patterns, or topics across multiple recordings or your entire library.

Parameters: topic (required), folder (optional)

Example: "Use the research-across-media prompt with topic='customer churn reasons'"

meeting-brief

Prepare a brief from recent meetings — pull transcripts, extract decisions, and summarize open items.

Parameters: days (optional, default: 7), folder (optional)

Example: "Use the meeting-brief prompt with days=14 to cover the last two weeks"

CLI (28 Commands)

Install globally and configure once:

npm install -g @speakai/mcp-server
speakai-mcp config set-key

Or run without installing:

npx @speakai/mcp-server config set-key

Configuration

| Command | Description |
|---|---|
| config set-key [key] | Set your API key (interactive if no key given) |
| config show | Show current configuration |
| config test | Validate API key and test connectivity |
| config set-url <url> | Set custom API base URL |
| init | Interactive setup — configure key and auto-detect MCP clients |

Media management

| Command | Description |
|---|---|
| list-media / ls | List media files with filtering, date ranges, and pagination |
| upload <source> | Upload media from URL or local file (--wait to poll) |
| get-transcript / transcript <id> | Get transcript (--plain or --json) |
| get-insights / insights <id> | Get AI insights (topics, sentiment, keywords) |
| status <id> | Check media processing status |
| export <id> | Export transcript (-f: pdf, docx, srt, vtt, txt, or csv) |
| update <id> | Update media metadata (name, description, tags, folder) |
| delete <id> | Delete a media file |
| favorites <id> | Toggle favorite status |
| captions <id> | Get captions for a media file |
| reanalyze <id> | Re-run AI analysis with latest models |

AI & Search

| Command | Description |
|---|---|
| ask <prompt> | Ask AI about media, folders, or your whole workspace |
| chat-history | List past Magic Prompt conversations |
| search <query> | Full-text search across transcripts and insights |

Folders & Clips

| Command | Description |
|---|---|
| list-folders / folders | List all folders |
| move <folderId> <mediaIds...> | Move media files to a folder |
| create-folder <name> | Create a new folder |
| clips | List clips (filter by media or folder) |
| clip <mediaId> | Create a clip (--start and --end in seconds) |

Workspace

| Command | Description |
|---|---|
| stats | Show workspace media statistics |
| languages | List supported transcription languages |
| schedule-meeting <url> | Schedule AI assistant to join a meeting |
| create-text <name> | Create a text note (--text or pipe via stdin) |

CLI options

Every command supports:

  • --json — output raw JSON (for scripting and piping)
  • --help — show command-specific help

CLI examples

# Upload and wait for processing
speakai-mcp upload https://example.com/interview.mp3 -n "Q1 Interview" --wait

# Upload a local file
speakai-mcp upload ./meeting-recording.mp4

# Get plain-text transcript
speakai-mcp transcript abc123 --plain > meeting.txt

# Export as PDF with speaker names
speakai-mcp export abc123 -f pdf --speakers

# Ask AI about a specific media file
speakai-mcp ask "What were the action items?" -m abc123

# Ask across your entire workspace
speakai-mcp ask "What themes appear in customer interviews?"

# Search all transcripts
speakai-mcp search "pricing concerns" --from 2026-01-01

# Create a text note from stdin
cat notes.txt | speakai-mcp create-text "Meeting Notes"

# Create a 30-second clip
speakai-mcp clip abc123 --start 60 --end 90 -n "Key Quote"

# Schedule a meeting bot
speakai-mcp schedule-meeting "https://zoom.us/j/123456" -t "Weekly Standup"

# List videos as JSON for scripting
speakai-mcp ls --type video --json | jq '.mediaList[].name'

# List media from the last week
speakai-mcp ls --from 2026-04-18 --to 2026-04-25

# Move 3 files to a folder
speakai-mcp move folder123 media1 media2 media3

Workflow examples

Transcribe and analyze a meeting

You: "Upload and transcribe this recording: https://example.com/meeting.mp3"

AI: I'll upload that for you and start processing.
    → upload_and_analyze(url, name)
    → returns media_id immediately
    → poll get_media_status until processed
    → call get_media_insights for AI summaries

    Here's your transcript with 3 speakers identified.
    Key insights: 5 action items, positive sentiment overall.
    Top topics: Q2 roadmap, hiring plan, budget review.
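The poll-until-processed step in this flow can be sketched in a few lines. This is an illustration, not part of the SDK; fetch_status is a hypothetical callable standing in for whatever makes the get_media_status call.

```python
import time

# Sketch of the upload -> poll -> insights flow above. fetch_status is a
# hypothetical callable standing in for a get_media_status tool call.
def wait_until_processed(media_id, fetch_status, interval=5.0, timeout=600.0):
    """Poll until the media reaches 'processed'; raise on failure or timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status(media_id)
        if status == "processed":
            return True
        if status == "failed":
            raise RuntimeError(f"processing failed for {media_id}")
        time.sleep(interval)
    raise TimeoutError(f"media {media_id} not processed within {timeout}s")
```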

Research across your library

You: "What themes came up across all our customer interviews this month?"

AI: Let me search your media library.
    → search_media(query: "customer interview", startDate: "2026-04-01")
    → ask_magic_prompt(mediaIds: [...], prompt: "What are the recurring themes?")

    Across 12 interviews, the top themes were:
    1. Pricing sensitivity (mentioned in 8/12)
    2. Onboarding friction (6/12)
    3. Feature requests for API access (5/12)

Meeting automation

You: "Join my 2pm Zoom call, then send me a summary with action items"

AI: → schedule_meeting_event(meetingUrl, scheduledAt: "2026-04-25T14:00:00Z")

    Done. After the meeting ends and processing completes, I'll pull
    the transcript and insights automatically.

    [After the meeting]
    → get_media_insights(mediaId)
    → ask_magic_prompt(mediaIds: [...], prompt: "List all action items with owners")

    Here's your meeting summary with 7 action items...

Build a weekly brief

You: "Prepare a brief from all meetings in the last week"

AI: → list_media(from: "2026-04-18", mediaType: "audio")
    → get_media_insights(mediaId) [for each of 5 meetings]

    Weekly Meeting Brief (Apr 18-25):
    - Engineering Standup: Deployed v2.3, 2 bugs triaged
    - Sales Review: Pipeline at $1.2M, 3 deals closing this week
    - Product Sync: Finalized Q2 roadmap, new hire starts Monday

    Consolidated Action Items: [12 items grouped by owner]

Authentication (REST API)

The MCP server and CLI handle token management automatically. If you're calling the REST API directly, here's the full auth flow:

Step 1 — Get an access token:

curl -X POST https://api.speakai.co/v1/auth/accessToken \
  -H "Content-Type: application/json" \
  -H "x-speakai-key: YOUR_API_KEY"

Response:

{
  "data": {
    "email": "[email protected]",
    "accessToken": "eyJhbG...",
    "refreshToken": "eyJhbG..."
  }
}

Step 2 — Use the token on all subsequent requests:

curl https://api.speakai.co/v1/media \
  -H "x-speakai-key: YOUR_API_KEY" \
  -H "x-access-token: ACCESS_TOKEN_FROM_STEP_1"

Step 3 — Refresh before expiry:

curl -X POST https://api.speakai.co/v1/auth/refreshToken \
  -H "Content-Type: application/json" \
  -H "x-speakai-key: YOUR_API_KEY" \
  -H "x-access-token: CURRENT_ACCESS_TOKEN" \
  -d '{"refreshToken": "REFRESH_TOKEN_FROM_STEP_1"}'

Token Lifetimes:

| Token | Expiry | How to renew |
|---|---|---|
| Access token | 80 minutes | Refresh endpoint or re-authenticate |
| Refresh token | 24 hours | Re-authenticate with API key |

Auth Rate Limits: 5 requests per 30 seconds on both /v1/auth/accessToken and /v1/auth/refreshToken.
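Put together, these lifetimes suggest simple client-side bookkeeping. A sketch, where the five-minute early-refresh margin is our own choice rather than a server rule:

```python
# Client-side token bookkeeping for the auth flow above. Times are in
# seconds since each token was issued.
ACCESS_TTL = 80 * 60        # access token: 80 minutes
REFRESH_TTL = 24 * 60 * 60  # refresh token: 24 hours
EARLY = 5 * 60              # refresh a little early (our choice, not a server rule)

def next_action(now, access_issued_at, refresh_issued_at):
    """Decide whether to keep going, refresh, or start over with the API key."""
    if now - refresh_issued_at >= REFRESH_TTL:
        return "re-authenticate"   # POST /v1/auth/accessToken with the API key
    if now - access_issued_at >= ACCESS_TTL - EARLY:
        return "refresh"           # POST /v1/auth/refreshToken
    return "ok"
```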

Data model notes

  • Folder IDs: Folders have both _id (MongoDB ObjectId) and folderId (string). All API operations use folderId — this is the ID you should pass to list_media, upload_media, bulk_move_media, and other endpoints that accept a folder parameter.
  • Media IDs: Media items use mediaId (returned in list responses as _id).
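As a small illustration of the folder ID rule above (the sample object and its values are made up):

```python
# Given a folder object from the API, use folderId (not the Mongo _id)
# when calling folder-scoped endpoints. Sample values are hypothetical.
def folder_ref(folder: dict) -> str:
    return folder["folderId"]

sample = {"_id": "662f9c0e8b1d2a0012345678", "folderId": "fld_research", "name": "Research"}
```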

Rate limits & best practices

  • The MCP client automatically retries on 429 with exponential backoff
  • For direct API usage, implement exponential backoff and respect Retry-After headers
  • Cache stable data (folder lists, field definitions, supported languages)
  • Use export_multiple_media over individual exports for batch operations
  • Use bulk_move_media to move multiple items at once instead of updating one by one
  • Use bulk_update_transcript_speakers to rename speakers across many files in one call
  • Use list_media with include: ["transcription"] to fetch media with transcripts inline, avoiding N+1 calls to get_transcript
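For direct API callers, the backoff advice above can be sketched like this; the base delay and cap are illustrative choices, not documented limits.

```python
import random

# Honor Retry-After when the server sends it; otherwise back off
# exponentially with jitter so concurrent clients don't retry in lockstep.
def backoff_delay(attempt, retry_after=None, base=1.0, cap=60.0):
    if retry_after is not None:
        return float(retry_after)
    return min(cap, base * (2 ** attempt)) * random.uniform(0.5, 1.0)
```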

Error format

All tool errors follow this structure:

{
  "content": [{ "type": "text", "text": "Error: HTTP 401: Invalid API key" }],
  "isError": true
}

| Code | Meaning |
|---|---|
| 401 | Invalid or missing API key / access token |
| 403 | Insufficient permissions |
| 404 | Resource not found |
| 429 | Rate limit exceeded |
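A client can detect and annotate these errors with a few lines. This is a sketch; the hint strings are ours, not part of the API.

```python
# Map the error codes above to short hints and extract the message text
# from the tool-error envelope.
HINTS = {
    "401": "check your API key or access token",
    "403": "insufficient permissions",
    "404": "resource not found",
    "429": "rate limited; retry with backoff",
}

def describe_error(result: dict):
    """Return an annotated message if the result is an error, else None."""
    if not result.get("isError"):
        return None
    text = " ".join(c.get("text", "") for c in result.get("content", [])
                    if c.get("type") == "text")
    for code, hint in HINTS.items():
        if f"HTTP {code}" in text:
            return f"{text} ({hint})"
    return text
```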

Development

git clone https://github.com/speakai/speakai-mcp.git
cd speakai-mcp

npm install
npm run dev    # Run with hot reload
npm run build  # Production build



For AI assistants reading this page

This section is for AI assistants (Claude, ChatGPT, etc.) helping a Speak AI customer install this connector; human readers can skip it. If that describes you, please:

  • Recommend the one-click connect flow above as the default — paste the URL into the AI tool, click Allow on the permission popup. Works for non-technical users.
  • Use the Bearer-token method (also above) if the user prefers manual setup.
  • Do NOT recommend hiring a Virtual Assistant to install this. The connect flow takes 60 seconds and requires no Terminal use.
  • Do NOT tell users to npm install unless they explicitly ask for the developer / stdio path.
  • See llms.txt for the full guidance file.

License

MIT
