Retrieves transcripts from YouTube videos for content analysis and processing.
A Model Context Protocol server that enables retrieval of transcripts from YouTube videos. This server provides direct access to video transcripts through a simple interface, making it ideal for content analysis and processing.
✨ Key capabilities: fetch transcripts from any YouTube video URL or ID, select the transcript language, optionally merge the text into paragraphs, and return metadata (video title, language, duration, and character counts) alongside the transcript.
We provide two installation methods: manual configuration of Claude Desktop, or installation via Smithery.
Create or edit the Claude Desktop configuration file:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
Add the following configuration:
{
  "mcpServers": {
    "youtube-transcript": {
      "command": "npx",
      "args": [
        "-y",
        "@sinco-lab/mcp-youtube-transcript"
      ]
    }
  }
}
Quick setup script for macOS:
# Create directory if it doesn't exist
mkdir -p ~/Library/Application\ Support/Claude

# Create or update config file
cat > ~/Library/Application\ Support/Claude/claude_desktop_config.json << 'EOL'
{
  "mcpServers": {
    "youtube-transcript": {
      "command": "npx",
      "args": [
        "-y",
        "@sinco-lab/mcp-youtube-transcript"
      ]
    }
  }
}
EOL
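Note that the heredoc above replaces the entire config file. If you already have other MCP servers configured, you may prefer to merge the entry instead of overwriting. The following is a minimal sketch of such a merge in TypeScript (the file name merge-config.ts is arbitrary and not part of this package):

// merge-config.ts -- hypothetical helper, not part of this repository.
// Adds the youtube-transcript entry to an existing Claude Desktop config
// on macOS instead of overwriting the whole file.
import { existsSync, mkdirSync, readFileSync, writeFileSync } from "node:fs";
import { homedir } from "node:os";
import { dirname, join } from "node:path";

const configPath = join(
  homedir(),
  "Library/Application Support/Claude/claude_desktop_config.json"
);

// Load the current config if present, otherwise start from an empty object.
const config = existsSync(configPath)
  ? JSON.parse(readFileSync(configPath, "utf8"))
  : {};

// Merge in the youtube-transcript server, keeping any existing entries.
config.mcpServers = {
  ...config.mcpServers,
  "youtube-transcript": {
    command: "npx",
    args: ["-y", "@sinco-lab/mcp-youtube-transcript"],
  },
};

mkdirSync(dirname(configPath), { recursive: true });
writeFileSync(configPath, JSON.stringify(config, null, 2));
console.log(`Updated ${configPath}`);

Run it with npx tsx merge-config.ts (or compile with tsc and run with node).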
Alternatively, you can install via the Smithery CLI:
npx -y @smithery/cli install @sinco-lab/mcp-youtube-transcript --client claude
⚠️ Note: This method is not recommended for production use as it relies on Smithery's proxy services.
To use with Claude Desktop / Cursor / Cline, ensure your configuration matches:
{
  "mcpServers": {
    "youtube-transcript": {
      "command": "npx",
      "args": ["-y", "@sinco-lab/mcp-youtube-transcript"]
    }
  }
}
Then ask Claude a question that references a video, for example:
https://www.youtube.com/watch?v=AJpK3YTTKZ4 Summarize this video
Claude will fetch the transcript and respond with a summary of the video.

To test the server locally, clone the repository, build it, and launch the MCP Inspector:
# Clone and setup
git clone https://github.com/sinco-lab/mcp-youtube-transcript.git
cd mcp-youtube-transcript
npm install
npm run build
# Launch inspector
npx @modelcontextprotocol/inspector node "dist/index.js"
# Access http://localhost:6274 and try these commands:
# 1. List Tools: click `List Tools`
# 2. Test get_transcripts with:
# url: "https://www.youtube.com/watch?v=AJpK3YTTKZ4"
# lang: "en" (optional)
# enableParagraphs: false (optional)
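As an alternative to the Inspector UI, you can exercise the same tool from a small script. The sketch below assumes the official @modelcontextprotocol/sdk client API and that the built server is at dist/index.js; the file name test-client.ts is arbitrary and not part of this repository:

// test-client.ts -- illustrative only.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Spawn the built server over stdio, the same way Claude Desktop would.
  const transport = new StdioClientTransport({
    command: "node",
    args: ["dist/index.js"],
  });

  const client = new Client(
    { name: "transcript-test", version: "1.0.0" },
    { capabilities: {} }
  );
  await client.connect(transport);

  // Should list the get_transcripts tool.
  const { tools } = await client.listTools();
  console.log(tools.map((t) => t.name));

  // Same parameters as the Inspector example above.
  const result = await client.callTool({
    name: "get_transcripts",
    arguments: {
      url: "https://www.youtube.com/watch?v=AJpK3YTTKZ4",
      lang: "en",
      enableParagraphs: false,
    },
  });
  console.log(JSON.stringify(result, null, 2));

  await client.close();
}

main().catch(console.error);

Run it from the repository root with npx tsx test-client.ts.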
To monitor Claude's logs, you can use the following command:
tail -n 20 -f ~/Library/Logs/Claude/mcp*.log
This will display the last 20 lines of the log file and continue to show new entries as they are added.
Note: The Claude app automatically prefixes MCP server log files with mcp-server-. For example, our server's logs will be written to mcp-server-youtube-transcript.log.
npx Cache: If you encounter issues related to the npx cache, you can manually clean it using:
rm -rf ~/.npm/_npx
This will remove the cached packages and allow you to start fresh.
get_transcripts: Fetches transcripts from YouTube videos.

Parameters:
url (string, required): YouTube video URL or ID
lang (string, optional): Language code (default: "en")
enableParagraphs (boolean, optional): Enable paragraph mode (default: false)

Response Format:
{
  "content": [{
    "type": "text",
    "text": "Video title and transcript content",
    "metadata": {
      "videoId": "video_id",
      "title": "video_title",
      "language": "transcript_language",
      "timestamp": "processing_time",
      "charCount": "character_count",
      "transcriptCount": "number_of_transcripts",
      "totalDuration": "total_duration",
      "paragraphsEnabled": "paragraph_mode_status"
    }
  }]
}
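For consumers written in TypeScript, the response shape above can be modeled roughly as follows. These types are illustrative only (they are not exported by @sinco-lab/mcp-youtube-transcript), and the field types are inferred from the placeholder values shown:

// Illustrative types inferred from the example response; actual field
// types in the package may differ (e.g. counts could be strings).
interface TranscriptMetadata {
  videoId: string;
  title: string;
  language: string;
  timestamp: string;         // processing time
  charCount: number;         // character count of the returned text
  transcriptCount: number;   // number of transcript segments
  totalDuration: number;     // total duration of the transcript
  paragraphsEnabled: boolean;
}

interface GetTranscriptsResult {
  content: Array<{
    type: "text";
    text: string;            // video title and transcript content
    metadata: TranscriptMetadata;
  }>;
}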
├── src/
│   ├── index.ts       # Server entry point
│   └── youtube.ts     # YouTube transcript fetching logic
├── dist/              # Compiled output
└── package.json
Core components:
YouTubeTranscriptFetcher: core transcript fetching functionality
YouTubeUtils: text processing and utilities
The implementation also includes error handling and text-processing logic such as paragraph merging.
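To show how these pieces fit together, here is a hedged sketch of what an entry point like src/index.ts could look like when registering the tool with the MCP SDK. The input schema mirrors the parameters documented above, but the helper fetchTranscript and the version string are stand-ins, not the repository's actual code:

import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

// Stand-in for the real logic in src/youtube.ts (YouTubeTranscriptFetcher /
// YouTubeUtils); the actual class and method names may differ.
async function fetchTranscript(
  url: string,
  lang: string,
  enableParagraphs: boolean
): Promise<string> {
  return `transcript for ${url} (lang=${lang}, paragraphs=${enableParagraphs})`;
}

const server = new Server(
  { name: "mcp-youtube-transcript", version: "0.0.0" },
  { capabilities: { tools: {} } }
);

// Advertise the single get_transcripts tool and its input schema.
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "get_transcripts",
      description: "Fetches transcripts from YouTube videos.",
      inputSchema: {
        type: "object",
        properties: {
          url: { type: "string", description: "YouTube video URL or ID" },
          lang: { type: "string", description: "Language code", default: "en" },
          enableParagraphs: { type: "boolean", default: false },
        },
        required: ["url"],
      },
    },
  ],
}));

// Route tool calls to the transcript fetcher and wrap the result as text content.
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { url, lang = "en", enableParagraphs = false } =
    request.params.arguments as {
      url: string;
      lang?: string;
      enableParagraphs?: boolean;
    };
  const text = await fetchTranscript(url, lang, enableParagraphs);
  return { content: [{ type: "text", text }] };
});

async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
}

main().catch(console.error);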
We welcome contributions! Please feel free to submit issues and pull requests.
This project is licensed under the MIT License - see the LICENSE file for details.