# Unified MCP Client Library

A TypeScript library for integrating MCP with tools like LangChain.js and Zod, providing helpers for schema conversion and event streaming.

🌐 MCP Client is the open-source way to connect any LLM to any MCP server in TypeScript/Node.js, letting you build custom agents with tool access without closed-source dependencies.

💡 Lets developers easily connect any LLM via LangChain.js to tools like web browsing, file operations, 3D modeling, and more.
## ✨ Key Features

| Feature | Description |
|---|---|
| 🔄 Ease of use | Create an MCP-capable agent in just a few lines of TypeScript. |
| 🤖 LLM Flexibility | Works with any LangChain.js-supported LLM that supports tool calling. |
| 🌐 HTTP Support | Direct SSE/HTTP connection to MCP servers. |
| ⚙️ Dynamic Server Selection | Agents select the right MCP server from a pool on the fly. |
| 🧩 Multi-Server Support | Use multiple MCP servers in one agent. |
| 🛡️ Tool Restrictions | Restrict potentially unsafe tools such as filesystem or network access. |
| 🔧 Custom Agents | Build your own agents with the LangChain.js adapter or implement new adapters. |
| 📊 Observability | Built-in support for Langfuse with dynamic metadata and tag handling. |
## 🚀 Quick Start

### Requirements

- Node.js 22.0.0 or higher
- npm, yarn, or pnpm (examples use pnpm)

### Installation

```bash
# Install from npm
npm install mcp-use

# LangChain.js and your LLM provider (e.g., OpenAI)
npm install langchain @langchain/openai dotenv

# Optional: Install observability packages for monitoring
npm install langfuse langfuse-langchain # For Langfuse observability
```

Create a `.env` file:

```bash
OPENAI_API_KEY=your_api_key
```
### Basic Usage

```ts
import { ChatOpenAI } from '@langchain/openai'
import { MCPAgent, MCPClient } from 'mcp-use'
import 'dotenv/config'

async function main() {
  // 1. Configure MCP servers
  const config = {
    mcpServers: {
      playwright: { command: 'npx', args: ['@playwright/mcp@latest'] }
    }
  }
  const client = MCPClient.fromDict(config)

  // 2. Create LLM
  const llm = new ChatOpenAI({ modelName: 'gpt-4o' })

  // 3. Instantiate agent
  const agent = new MCPAgent({ llm, client, maxSteps: 20 })

  // 4. Run query
  const result = await agent.run('Find the best restaurant in Tokyo using Google Search')
  console.log('Result:', result)
}

main().catch(console.error)
```
## 🔧 API Methods

### MCPAgent Methods

The `MCPAgent` class provides several methods for executing queries with different output formats:

#### `run(query: string, maxSteps?: number): Promise<string>`

Executes a query and returns the final result as a string.

```ts
const result = await agent.run('What tools are available?')
console.log(result)
```
#### `stream(query: string, maxSteps?: number): AsyncGenerator<AgentStep, string, void>`

Yields intermediate steps during execution, providing visibility into the agent's reasoning process.

```ts
const stream = agent.stream('Search for restaurants in Tokyo')
for await (const step of stream) {
  console.log(`Tool: ${step.action.tool}, Input: ${step.action.toolInput}`)
  console.log(`Result: ${step.observation}`)
}
```
#### `streamEvents(query: string, maxSteps?: number): AsyncGenerator<StreamEvent, void, void>`

Yields fine-grained LangChain `StreamEvent` objects, enabling token-by-token streaming and detailed event tracking.

```ts
const eventStream = agent.streamEvents('What is the weather today?')
for await (const event of eventStream) {
  // Handle different event types
  switch (event.event) {
    case 'on_chat_model_stream':
      // Token-by-token streaming from the LLM
      if (event.data?.chunk?.content) {
        process.stdout.write(event.data.chunk.content)
      }
      break
    case 'on_tool_start':
      console.log(`\nTool started: ${event.name}`)
      break
    case 'on_tool_end':
      console.log(`Tool completed: ${event.name}`)
      break
  }
}
```
### Key Differences

- `run()`: Best for simple queries where you only need the final result
- `stream()`: Best for debugging and understanding the agent's tool usage
- `streamEvents()`: Best for real-time UI updates with token-level streaming
## 🔄 AI SDK Integration

The library provides built-in utilities for integrating with the Vercel AI SDK, making it easy to build streaming UIs with React hooks like `useCompletion` and `useChat`.

### Installation

```bash
npm install ai @langchain/anthropic
```
### Basic Usage

```ts
import { ChatAnthropic } from '@langchain/anthropic'
import { LangChainAdapter } from 'ai'
import { createReadableStreamFromGenerator, MCPAgent, MCPClient, streamEventsToAISDK } from 'mcp-use'

async function createApiHandler() {
  const config = {
    mcpServers: {
      everything: { command: 'npx', args: ['-y', '@modelcontextprotocol/server-everything'] }
    }
  }
  const client = new MCPClient(config)
  const llm = new ChatAnthropic({ model: 'claude-sonnet-4-20250514' })
  const agent = new MCPAgent({ llm, client, maxSteps: 5 })

  return async (request: { prompt: string }) => {
    const streamEvents = agent.streamEvents(request.prompt)
    const aiSDKStream = streamEventsToAISDK(streamEvents)
    const readableStream = createReadableStreamFromGenerator(aiSDKStream)
    return LangChainAdapter.toDataStreamResponse(readableStream)
  }
}
```
### Enhanced Usage with Tool Visibility

```ts
import { ChatAnthropic } from '@langchain/anthropic'
import { LangChainAdapter } from 'ai'
import { createReadableStreamFromGenerator, MCPAgent, MCPClient, streamEventsToAISDKWithTools } from 'mcp-use'

async function createEnhancedApiHandler() {
  const config = {
    mcpServers: {
      everything: { command: 'npx', args: ['-y', '@modelcontextprotocol/server-everything'] }
    }
  }
  const client = new MCPClient(config)
  const llm = new ChatAnthropic({ model: 'claude-sonnet-4-20250514' })
  const agent = new MCPAgent({ llm, client, maxSteps: 8 })

  return async (request: { prompt: string }) => {
    const streamEvents = agent.streamEvents(request.prompt)
    // Enhanced stream includes tool usage notifications
    const enhancedStream = streamEventsToAISDKWithTools(streamEvents)
    const readableStream = createReadableStreamFromGenerator(enhancedStream)
    return LangChainAdapter.toDataStreamResponse(readableStream)
  }
}
```
### Next.js API Route Example

```ts
// pages/api/chat.ts or app/api/chat/route.ts
import { ChatAnthropic } from '@langchain/anthropic'
import { LangChainAdapter } from 'ai'
import { createReadableStreamFromGenerator, MCPAgent, MCPClient, streamEventsToAISDK } from 'mcp-use'

export async function POST(req: Request) {
  const { prompt } = await req.json()

  const config = {
    mcpServers: {
      everything: { command: 'npx', args: ['-y', '@modelcontextprotocol/server-everything'] }
    }
  }
  const client = new MCPClient(config)
  const llm = new ChatAnthropic({ model: 'claude-sonnet-4-20250514' })
  const agent = new MCPAgent({ llm, client, maxSteps: 10 })

  try {
    const streamEvents = agent.streamEvents(prompt)
    const aiSDKStream = streamEventsToAISDK(streamEvents)
    const readableStream = createReadableStreamFromGenerator(aiSDKStream)
    return LangChainAdapter.toDataStreamResponse(readableStream)
  }
  finally {
    await client.closeAllSessions()
  }
}
```
### Frontend Integration

```tsx
// components/Chat.tsx
import { useCompletion } from 'ai/react'

export function Chat() {
  const { completion, input, handleInputChange, handleSubmit } = useCompletion({
    api: '/api/chat',
  })

  return (
    <div>
      <div>{completion}</div>
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Ask me anything..."
        />
      </form>
    </div>
  )
}
```
### Available AI SDK Utilities

- `streamEventsToAISDK()`: Converts streamEvents to a basic text stream
- `streamEventsToAISDKWithTools()`: Enhanced stream with tool usage notifications
- `createReadableStreamFromGenerator()`: Converts an async generator to a `ReadableStream`
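If you ever need to bridge a custom async generator to a `ReadableStream` yourself, the conversion performed by a helper like `createReadableStreamFromGenerator()` can be sketched in a few lines. This is an illustrative implementation, not the library's actual code:

```typescript
// Illustrative sketch: wrap an async generator in a ReadableStream by pulling
// one chunk per read. Not the library's internal implementation.
function generatorToReadableStream<T>(gen: AsyncGenerator<T>): ReadableStream<T> {
  return new ReadableStream<T>({
    async pull(controller) {
      const { value, done } = await gen.next()
      if (done)
        controller.close()
      else
        controller.enqueue(value as T)
    },
  })
}

// Example generator standing in for an agent's text-chunk stream
async function* exampleChunks(): AsyncGenerator<string> {
  yield 'Hello, '
  yield 'world!'
}

// Drain a stream into a single string (for demonstration)
async function collect(stream: ReadableStream<string>): Promise<string> {
  const reader = stream.getReader()
  let out = ''
  while (true) {
    const { value, done } = await reader.read()
    if (done)
      break
    out += value
  }
  return out
}
```

Here `collect(generatorToReadableStream(exampleChunks()))` resolves to `'Hello, world!'`; consumers such as `LangChainAdapter` read the stream the same chunk-by-chunk way.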
## 📊 Observability & Monitoring

mcp-use-ts provides built-in observability support through the `ObservabilityManager`, with integration for Langfuse and other observability platforms.

To enable observability, simply configure the following environment variables:

```bash
# .env
LANGFUSE_PUBLIC_KEY=pk-lf-your-public-key
LANGFUSE_SECRET_KEY=sk-lf-your-secret-key
LANGFUSE_HOST=https://cloud.langfuse.com # or your self-hosted instance
```
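Integrations of this kind generally activate only when both keys are present. A minimal sketch of such an enablement check (illustrative only; the actual logic lives inside the library):

```typescript
// Illustrative check: Langfuse-style observability typically requires both a
// public key and a secret key before it can be enabled.
function isLangfuseConfigured(env: Record<string, string | undefined>): boolean {
  return Boolean(env.LANGFUSE_PUBLIC_KEY && env.LANGFUSE_SECRET_KEY)
}

// e.g. isLangfuseConfigured(process.env) in a Node.js app
```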
### Advanced Observability Features

#### Dynamic Metadata and Tags

```ts
// Set custom metadata for the current execution
agent.setMetadata({
  userId: 'user123',
  sessionId: 'session456',
  environment: 'production'
})

// Set tags for better organization
agent.setTags(['production', 'user-query', 'tool-discovery'])

// Run query with metadata and tags
const result = await agent.run('Search for restaurants in Tokyo')
```
#### Monitoring Agent Performance

```ts
// Stream events for detailed monitoring
const eventStream = agent.streamEvents('Complex multi-step query')

for await (const event of eventStream) {
  // Monitor different event types
  switch (event.event) {
    case 'on_llm_start':
      console.log('LLM call started:', event.data)
      break
    case 'on_tool_start':
      console.log('Tool execution started:', event.name, event.data)
      break
    case 'on_tool_end':
      console.log('Tool execution completed:', event.name, event.data)
      break
    case 'on_chain_end':
      console.log('Agent execution completed:', event.data)
      break
  }
}
```
### Disabling Observability

To disable observability, either remove the Langfuse environment variables or pass `observe: false` when creating the agent:

```ts
const agent = new MCPAgent({
  llm,
  client,
  observe: false
})
```
## 📂 Configuration File

You can store server definitions in a JSON file:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```

Load it:

```ts
import { MCPClient } from 'mcp-use'

const client = MCPClient.fromConfigFile('./mcp-config.json')
```
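The JSON file mirrors the inline config objects used throughout this README. If you build configs programmatically and want compile-time checking, a minimal type sketch could look like the following (field names are inferred from this README's examples, not an official export of mcp-use; the `url` field for HTTP/SSE servers is an assumption):

```typescript
// Inferred from this README's examples — not an official mcp-use type export.
interface ServerEntry {
  command?: string // executable for stdio-based servers
  args?: string[]  // arguments passed to the command
  url?: string     // endpoint for HTTP/SSE servers (assumed field name)
}

interface MCPConfig {
  mcpServers: Record<string, ServerEntry>
}

// Parsing the same JSON shown above into the typed shape
const config: MCPConfig = JSON.parse(`{
  "mcpServers": {
    "playwright": { "command": "npx", "args": ["@playwright/mcp@latest"] }
  }
}`)
```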
## 📚 Examples

We provide a comprehensive set of examples demonstrating various use cases. All examples are located in the `examples/` directory with a dedicated README.

### Running Examples

```bash
# Install dependencies
npm install

# Run any example
npm run example:airbnb        # Search accommodations with Airbnb
npm run example:browser       # Browser automation with Playwright
npm run example:chat          # Interactive chat with memory
npm run example:stream        # Demonstrate streaming methods (stream & streamEvents)
npm run example:stream_events # Comprehensive streamEvents() examples
npm run example:ai_sdk        # AI SDK integration with streaming
npm run example:filesystem    # File system operations
npm run example:http          # HTTP server connection
npm run example:everything    # Test MCP functionalities
npm run example:multi         # Multiple servers in one session
```
### Example Highlights
- Browser Automation: Control browsers to navigate websites and extract information
- File Operations: Read, write, and manipulate files through MCP
- Multi-Server: Combine multiple MCP servers (Airbnb + Browser) in a single task
- Sandboxed Execution: Run MCP servers in isolated E2B containers
- OAuth Flows: Authenticate with services like Linear using OAuth2
- Streaming Methods: Demonstrate both step-by-step and token-level streaming
- AI SDK Integration: Build streaming UIs with Vercel AI SDK and React hooks
See the examples README for detailed documentation and prerequisites.
## 🔄 Multi-Server Example

```ts
const config = {
  mcpServers: {
    airbnb: { command: 'npx', args: ['@openbnb/mcp-server-airbnb'] },
    playwright: { command: 'npx', args: ['@playwright/mcp@latest'] }
  }
}
const client = MCPClient.fromDict(config)
const agent = new MCPAgent({ llm, client, useServerManager: true })

await agent.run('Search Airbnb in Barcelona, then Google restaurants nearby')
```
## 🔒 Tool Access Control

```ts
const agent = new MCPAgent({
  llm,
  client,
  disallowedTools: ['file_system', 'network']
})
```
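Conceptually, restriction amounts to filtering the discovered tools by name before they are exposed to the LLM. A simplified sketch of that idea (not the library's internal implementation):

```typescript
// Simplified illustration of name-based tool filtering, the idea behind
// disallowedTools. Not the library's internal code.
interface Tool {
  name: string
}

function filterDisallowed(tools: Tool[], disallowedTools: string[]): Tool[] {
  return tools.filter(tool => !disallowedTools.includes(tool.name))
}

const allowed = filterDisallowed(
  [{ name: 'file_system' }, { name: 'network' }, { name: 'web_search' }],
  ['file_system', 'network']
)
// allowed now contains only the web_search tool
```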
## 👥 Contributors

## 📜 License

MIT © Zane