# LinkedIn MCP Server

Scrape LinkedIn profiles and companies, get recommended jobs, and perform job searches.

Through this LinkedIn MCP server, AI assistants like Claude can connect to your LinkedIn account. Access profiles and companies, search for jobs, or get job details.
## Installation Methods

https://github.com/user-attachments/assets/eb84419a-6eaf-47bd-ac52-37bc59c83680
## Usage Examples

- Research the background of this candidate: https://www.linkedin.com/in/stickerdaniel/
- Get this company profile for partnership discussions: https://www.linkedin.com/company/inframs/
- Suggest improvements for my CV to target this job posting: https://www.linkedin.com/jobs/view/4252026496
- What has Anthropic been posting about recently? https://www.linkedin.com/company/anthropicresearch/
## Features & Tool Status

| Tool | Description | Status |
|---|---|---|
| `get_person_profile` | Get profile info with explicit section selection (experience, education, interests, honors, languages, contact_info, posts) | Working |
| `get_company_profile` | Extract company information with explicit section selection (posts, jobs) | Working |
| `get_company_posts` | Get recent posts from a company's LinkedIn feed | Working |
| `search_jobs` | Search for jobs with keywords and location filters | Working |
| `search_people` | Search for people by keywords and location | Working |
| `get_job_details` | Get detailed information about a specific job posting | Working |
| `close_session` | Close browser session and clean up resources | Working |
Tool responses keep readable section text and may also include a compact `references` map keyed by section. Each reference includes a typed target, a relative LinkedIn path (or an absolute external URL), and a short label/context when available.

When one section fails but the overall tool call still completes, responses may also include `section_errors`. Each entry contains structured diagnostics for that section: the error type and message, a compact runtime summary, trace/log locations, matching-open-issue hints when available, and the path to a generated issue-ready markdown report with the full session details.
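For illustration only, a response whose posts section failed might look roughly like the following. The field names and shapes here are assumptions made for this sketch, not the server's exact schema:

```json
{
  "sections": {
    "experience": "Readable text for the experience section..."
  },
  "references": {
    "experience": [
      {
        "type": "company",
        "path": "/company/anthropicresearch/",
        "label": "Anthropic"
      }
    ]
  },
  "section_errors": {
    "posts": {
      "error_type": "TimeoutError",
      "message": "Timed out waiting for the posts feed",
      "report_path": "~/.linkedin-mcp/reports/session-report.md"
    }
  }
}
```

The readable section text is what most clients display; the references map lets a client resolve linked entities without re-parsing the prose.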
> [!IMPORTANT]
> Breaking change: LinkedIn recently made changes to prevent scraping. The newest version uses Patchright with persistent browser profiles instead of Playwright with session files. Old `session.json` files and `LINKEDIN_COOKIE` env vars are no longer supported. Run `--login` again to create a new profile + cookie file that can be mounted in Docker. 02/2026
## 🚀 uvx Setup (Recommended - Universal)

**Prerequisites:** Install uv and run `uvx patchright install chromium` to set up the browser.
### Installation

**Step 1: Create a session (first time only)**

```bash
uvx linkedin-scraper-mcp --login
```

This opens a browser for you to log in manually (5-minute timeout for 2FA, captcha, etc.). The browser profile is saved to `~/.linkedin-mcp/profile/`.
**Step 2: Configure your MCP client**

```json
{
  "mcpServers": {
    "linkedin": {
      "command": "uvx",
      "args": ["linkedin-scraper-mcp"]
    }
  }
}
```
> [!NOTE]
> Sessions may expire over time. If you encounter authentication issues, run `uvx linkedin-scraper-mcp --login` again.
### uvx Setup Help

#### 🔧 Configuration
**Transport Modes:**

- Default (`stdio`): standard communication for local MCP servers
- Streamable HTTP: for web-based MCP clients
- If no transport is specified, the server defaults to `stdio`
- An interactive terminal without an explicit transport shows a chooser prompt
**CLI Options:**

- `--login` - Open browser to log in and save a persistent profile
- `--no-headless` - Show browser window (useful for debugging scraping issues)
- `--log-level {DEBUG,INFO,WARNING,ERROR}` - Set logging level (default: WARNING)
- `--transport {stdio,streamable-http}` - Optional: force transport mode (default: stdio)
- `--host HOST` - HTTP server host (default: 127.0.0.1)
- `--port PORT` - HTTP server port (default: 8000)
- `--path PATH` - HTTP server path (default: /mcp)
- `--logout` - Clear stored LinkedIn browser profile
- `--timeout MS` - Browser timeout for page operations in milliseconds (default: 5000)
- `--user-data-dir PATH` - Path to persistent browser profile directory (default: ~/.linkedin-mcp/profile)
- `--chrome-path PATH` - Path to Chrome/Chromium executable (for custom browser installations)
**Basic Usage Examples:**

```bash
# Create a session interactively
uvx linkedin-scraper-mcp --login

# Run with debug logging
uvx linkedin-scraper-mcp --log-level DEBUG
```
**HTTP Mode Example (for web-based MCP clients):**

```bash
uvx linkedin-scraper-mcp --transport streamable-http --host 127.0.0.1 --port 8080 --path /mcp
```
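In HTTP mode, clients connect by URL rather than by launching a command. As a rough sketch for clients that support streamable HTTP (the exact config schema varies by client; the `url` key below is an assumption, not a universal standard):

```json
{
  "mcpServers": {
    "linkedin": {
      "url": "http://127.0.0.1:8080/mcp"
    }
  }
}
```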
Runtime server logs are emitted by FastMCP/Uvicorn. Tool calls are serialized within a single server process to protect the shared LinkedIn browser session; concurrent client requests queue instead of running in parallel. Use `--log-level DEBUG` to see scraper lock wait/acquire/release logs.
**Test with MCP Inspector:**

- Install and run MCP Inspector: `bunx @modelcontextprotocol/inspector`
- Click the pre-filled token URL to open the Inspector in your browser
- Select `Streamable HTTP` as `Transport Type`
- Set `URL` to `http://localhost:8080/mcp`
- Connect
- Test tools
#### ❗ Troubleshooting

**Installation issues:**

- Ensure you have uv installed: `curl -LsSf https://astral.sh/uv/install.sh | sh`
- Check uv version: `uv --version` (should be 0.4.0 or higher)
**Session issues:**

- Browser profile is stored at `~/.linkedin-mcp/profile/`
- Make sure you have only one active LinkedIn session at a time
**Login issues:**

- LinkedIn may require a login confirmation in the LinkedIn mobile app for `--login`
- You might get a captcha challenge if you log in frequently. Run `uvx linkedin-scraper-mcp --login`, which opens a browser where you can solve it manually.
**Timeout issues:**

- If pages fail to load or elements aren't found, try increasing the timeout: `--timeout 10000`
- Users on slow connections may need higher values (e.g., 15000-30000 ms)
- Can also be set via environment variable: `TIMEOUT=10000`
**Custom Chrome path:**

- If Chrome is installed in a non-standard location, use `--chrome-path /path/to/chrome`
- Can also be set via environment variable: `CHROME_PATH=/path/to/chrome`
## 🐳 Docker Setup

**Prerequisites:** Make sure you have Docker installed and running.

### Authentication

Docker runs headless (no browser window), so you need to create a browser profile locally first and mount it into the container.
**Step 1: Create a profile on the host (one-time setup)**

```bash
uvx linkedin-scraper-mcp --login
```

This opens a browser window where you log in manually (5-minute timeout for 2FA, captcha, etc.). The browser profile and cookies are saved under `~/.linkedin-mcp/`. On startup, Docker derives a Linux browser profile from your host cookies and creates a fresh session each time. If you experience stability issues with Docker, consider using the uvx setup instead.
**Step 2: Configure Claude Desktop with Docker**

```json
{
  "mcpServers": {
    "linkedin": {
      "command": "docker",
      "args": [
        "run", "--rm", "-i",
        "-v", "~/.linkedin-mcp:/home/pwuser/.linkedin-mcp",
        "stickerdaniel/linkedin-mcp-server:latest"
      ]
    }
  }
}
```
> [!NOTE]
> Docker creates a fresh session on each startup. Sessions may expire over time; run `uvx linkedin-scraper-mcp --login` again if you encounter authentication issues.

> [!NOTE]
> Why can't I run `--login` in Docker? Docker containers don't have a display server. Create a profile on your host using the uvx setup and mount it into Docker.
### Docker Setup Help

#### 🔧 Configuration
**Transport Modes:**

- Default (`stdio`): standard communication for local MCP servers
- Streamable HTTP: for web-based MCP clients
- If no transport is specified, the server defaults to `stdio`
- An interactive terminal without an explicit transport shows a chooser prompt
**CLI Options:**

- `--log-level {DEBUG,INFO,WARNING,ERROR}` - Set logging level (default: WARNING)
- `--transport {stdio,streamable-http}` - Optional: force transport mode (default: stdio)
- `--host HOST` - HTTP server host (default: 127.0.0.1)
- `--port PORT` - HTTP server port (default: 8000)
- `--path PATH` - HTTP server path (default: /mcp)
- `--logout` - Clear all stored LinkedIn auth state, including source and derived runtime profiles
- `--timeout MS` - Browser timeout for page operations in milliseconds (default: 5000)
- `--user-data-dir PATH` - Path to persistent browser profile directory (default: ~/.linkedin-mcp/profile)
- `--chrome-path PATH` - Path to Chrome/Chromium executable (rarely needed in Docker)
> [!NOTE]
> `--login` and `--no-headless` are not available in Docker (no display server). Use the uvx setup to create profiles.
**HTTP Mode Example (for web-based MCP clients):**

```bash
docker run -it --rm \
  -v ~/.linkedin-mcp:/home/pwuser/.linkedin-mcp \
  -p 8080:8080 \
  stickerdaniel/linkedin-mcp-server:latest \
  --transport streamable-http --host 0.0.0.0 --port 8080 --path /mcp
```
Runtime server logs are emitted by FastMCP/Uvicorn.
**Test with MCP Inspector:**

- Install and run MCP Inspector: `bunx @modelcontextprotocol/inspector`
- Click the pre-filled token URL to open the Inspector in your browser
- Select `Streamable HTTP` as `Transport Type`
- Set `URL` to `http://localhost:8080/mcp`
- Connect
- Test tools
#### ❗ Troubleshooting

**Docker issues:**

- Make sure Docker is installed
- Check if Docker is running: `docker ps`
**Login issues:**

- Make sure you have only one active LinkedIn session at a time
- LinkedIn may require a login confirmation in the LinkedIn mobile app for `--login`
- You might get a captcha challenge if you log in frequently. Run `uvx linkedin-scraper-mcp --login`, which opens a browser where you can solve captchas manually. See the uvx setup for prerequisites.
- If Docker auth becomes stale after you re-login on the host, restart the container once so it can derive a fresh session from the new host profile.
**Timeout issues:**

- If pages fail to load or elements aren't found, try increasing the timeout: `--timeout 10000`
- Users on slow connections may need higher values (e.g., 15000-30000 ms)
- Can also be set via environment variable: `TIMEOUT=10000`
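Since the containerized server reads `TIMEOUT` from its environment, one way to raise it for a Docker-based client config is Docker's standard `-e` flag. A sketch based on the Claude Desktop config above (the 15000 value is just an example):

```json
{
  "mcpServers": {
    "linkedin": {
      "command": "docker",
      "args": [
        "run", "--rm", "-i",
        "-e", "TIMEOUT=15000",
        "-v", "~/.linkedin-mcp:/home/pwuser/.linkedin-mcp",
        "stickerdaniel/linkedin-mcp-server:latest"
      ]
    }
  }
}
```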
**Custom Chrome path:**

- If Chrome is installed in a non-standard location, use `--chrome-path /path/to/chrome`
- Can also be set via environment variable: `CHROME_PATH=/path/to/chrome`
## 📦 Claude Desktop (DXT Extension)

**Prerequisites:** Claude Desktop and Docker installed and running.

One-click installation for Claude Desktop users:

- Download the DXT extension
- Double-click to install into Claude Desktop
- Create a session: `uvx linkedin-scraper-mcp --login`
> [!NOTE]
> Sessions may expire over time. If you encounter authentication issues, run `uvx linkedin-scraper-mcp --login` again.
### DXT Extension Setup Help

#### ❗ Troubleshooting

**First-time setup timeout:**

- Claude Desktop has a ~60 second connection timeout
- If the Docker image isn't cached, the pull may exceed this timeout
- Fix: pre-pull the image before first use: `docker pull stickerdaniel/linkedin-mcp-server:2.3.0`
- Then restart Claude Desktop
**Docker issues:**

- Make sure Docker is installed
- Check if Docker is running: `docker ps`
**Login issues:**

- Make sure you have only one active LinkedIn session at a time
- LinkedIn may require a login confirmation in the LinkedIn mobile app for `--login`
- You might get a captcha challenge if you log in frequently. Run `uvx linkedin-scraper-mcp --login`, which opens a browser where you can solve captchas manually. See the uvx setup for prerequisites.
**Timeout issues:**

- If pages fail to load or elements aren't found, try increasing the timeout: `--timeout 10000`
- Users on slow connections may need higher values (e.g., 15000-30000 ms)
- Can also be set via environment variable: `TIMEOUT=10000`
## 🐍 Local Setup (Develop & Contribute)

Contributions are welcome! See CONTRIBUTING.md for architecture guidelines and checklists. Please open an issue first to discuss the feature or bug fix before submitting a PR.

**Prerequisites:** Git and uv installed.
### Installation

```bash
# 1. Clone repository
git clone https://github.com/stickerdaniel/linkedin-mcp-server
cd linkedin-mcp-server

# 2. Install the uv package manager (if not already installed)
curl -LsSf https://astral.sh/uv/install.sh | sh

# 3. Install dependencies
uv sync
uv sync --group dev

# 4. Install Patchright browser
uv run patchright install chromium

# 5. Install pre-commit hooks
uv run pre-commit install

# 6. Create a session (first time only)
uv run -m linkedin_mcp_server --login

# 7. Start the server
uv run -m linkedin_mcp_server
```
### Local Setup Help

#### 🔧 Configuration

**CLI Options:**

- `--login` - Open browser to log in and save a persistent profile
- `--no-headless` - Show browser window (useful for debugging scraping issues)
- `--log-level {DEBUG,INFO,WARNING,ERROR}` - Set logging level (default: WARNING)
- `--transport {stdio,streamable-http}` - Optional: force transport mode (default: stdio)
- `--host HOST` - HTTP server host (default: 127.0.0.1)
- `--port PORT` - HTTP server port (default: 8000)
- `--path PATH` - HTTP server path (default: /mcp)
- `--logout` - Clear stored LinkedIn browser profile
- `--timeout MS` - Browser timeout for page operations in milliseconds (default: 5000)
- `--status` - Check if the current session is valid and exit
- `--user-data-dir PATH` - Path to persistent browser profile directory (default: ~/.linkedin-mcp/profile)
- `--slow-mo MS` - Delay between browser actions in milliseconds (default: 0, useful for debugging)
- `--user-agent STRING` - Custom browser user agent
- `--viewport WxH` - Browser viewport size (default: 1280x720)
- `--chrome-path PATH` - Path to Chrome/Chromium executable (for custom browser installations)
- `--help` - Show help
Note: Most CLI options have environment variable equivalents. See `.env.example` for details.
**HTTP Mode Example (for web-based MCP clients):**

```bash
uv run -m linkedin_mcp_server --transport streamable-http --host 127.0.0.1 --port 8000 --path /mcp
```
**Claude Desktop:**

```json
{
  "mcpServers": {
    "linkedin": {
      "command": "uv",
      "args": ["--directory", "/path/to/linkedin-mcp-server", "run", "-m", "linkedin_mcp_server"]
    }
  }
}
```

stdio is used by default for this config.
#### ❗ Troubleshooting

**Login issues:**

- Make sure you have only one active LinkedIn session at a time
- LinkedIn may require a login confirmation in the LinkedIn mobile app for `--login`
- You might get a captcha challenge if you log in frequently. The `--login` command opens a browser where you can solve it manually.
**Scraping issues:**

- Use `--no-headless` to see browser actions and debug scraping problems
- Add `--log-level DEBUG` for more detailed logging
**Session issues:**

- Browser profile is stored at `~/.linkedin-mcp/profile/`
- Use `--logout` to clear the profile and start fresh
**Python/Patchright issues:**

- Check Python version: `python --version` (should be 3.12+)
- Reinstall Patchright: `uv run patchright install chromium`
- Reinstall dependencies: `uv sync --reinstall`
**Timeout issues:**

- If pages fail to load or elements aren't found, try increasing the timeout: `--timeout 10000`
- Users on slow connections may need higher values (e.g., 15000-30000 ms)
- Can also be set via environment variable: `TIMEOUT=10000`
**Custom Chrome path:**

- If Chrome is installed in a non-standard location, use `--chrome-path /path/to/chrome`
- Can also be set via environment variable: `CHROME_PATH=/path/to/chrome`
## Acknowledgements
Built with FastMCP and Patchright.
Use in accordance with LinkedIn's Terms of Service. Web scraping may violate LinkedIn's terms. This tool is for personal use only.
## License
This project is licensed under the Apache 2.0 license.