# mcp-agent-kit

> The easiest way to create MCP servers, AI agents, and chatbots with any LLM

A complete and intuitive SDK for building MCP servers, MCP agents, and LLM integrations (OpenAI, Claude, Gemini) with minimal effort. It abstracts away the complexity of the MCP protocol, provides an intelligent agent with automatic model routing, and includes a universal client for external APIs, all through a single, simple, and powerful interface. Perfect for chatbots, enterprise automation, internal system integrations, and rapid development of MCP-based ecosystems.
mcp-agent-kit is a TypeScript package that simplifies the creation of:
- 🔌 MCP Servers (Model Context Protocol)
- 🤖 AI Agents with multiple LLM providers
- 🧠 Intelligent Routers for multi-LLM orchestration
- 💬 Chatbots with conversation memory
- 🌐 API Helpers with retry and timeout
## Features

- **Zero Config**: Works out of the box with smart defaults
- **Multi-Provider**: OpenAI, Anthropic, Gemini, and Ollama support
- **Type-Safe**: Full TypeScript support with autocomplete
- **Production Ready**: Built-in retry, timeout, and error handling
- **Developer Friendly**: One-line setup for complex features
- **Extensible**: Easy to add custom providers and middleware
## Installation

```bash
npm install mcp-agent-kit
```
## Quick Start

### Create an AI Agent (1 line!)

```typescript
import { createAgent } from "mcp-agent-kit";

const agent = createAgent({ provider: "openai" });
const response = await agent.chat("Hello!");
console.log(response.content);
```
### Create an MCP Server (1 function!)

```typescript
import { createMCPServer } from "mcp-agent-kit";

const server = createMCPServer({
  name: "my-server",
  tools: [
    {
      name: "get_weather",
      description: "Get weather for a location",
      inputSchema: {
        type: "object",
        properties: {
          location: { type: "string" },
        },
      },
      handler: async ({ location }) => {
        return `Weather in ${location}: Sunny, 72°F`;
      },
    },
  ],
});

await server.start();
```
### Create a Chatbot with Memory

```typescript
import { createChatbot, createAgent } from "mcp-agent-kit";

const bot = createChatbot({
  agent: createAgent({ provider: "openai" }),
  system: "You are a helpful assistant",
  maxHistory: 10,
});

await bot.chat("Hi, my name is John");
await bot.chat("What is my name?"); // Remembers context!
```
## Documentation

### Table of Contents

- [AI Agents](#ai-agents)
- [Smart Tool Calling](#smart-tool-calling)
- [MCP Servers](#mcp-servers)
- [LLM Router](#llm-router)
- [Chatbots](#chatbots)
- [API Requests](#api-requests)
- [Configuration](#configuration)
- [Examples](#examples)
- [API Reference](#api-reference)
## AI Agents

Create intelligent agents that work with multiple LLM providers.

### Basic Usage

```typescript
import { createAgent } from "mcp-agent-kit";

const agent = createAgent({
  provider: "openai",
  model: "gpt-4-turbo-preview",
  temperature: 0.7,
  maxTokens: 2000,
});

const response = await agent.chat("Explain TypeScript");
console.log(response.content);
```
### Supported Providers
| Provider | Models | API Key Required |
|---|---|---|
| OpenAI | GPT-4, GPT-3.5 | ✅ Yes |
| Anthropic | Claude 3.5, Claude 3 | ✅ Yes |
| Gemini | Gemini 2.0+ | ✅ Yes |
| Ollama | Local models | ❌ No |
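Ollama is the one provider that needs no API key, since models run locally. A minimal sketch, assuming a local Ollama instance is running and a model has already been pulled (the `llama3` model name here is illustrative, not a package default):

```typescript
import { createAgent } from "mcp-agent-kit";

// Assumes Ollama is running locally (default host http://localhost:11434)
// and that the "llama3" model has been pulled beforehand.
const localAgent = createAgent({
  provider: "ollama",
  model: "llama3", // illustrative model name
});

const response = await localAgent.chat("Summarize the MCP protocol in one sentence.");
console.log(response.content);
```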
### With Tools (Function Calling)

```typescript
const agent = createAgent({
  provider: "openai",
  tools: [
    {
      name: "calculate",
      description: "Perform calculations",
      parameters: {
        type: "object",
        properties: {
          operation: { type: "string", enum: ["add", "subtract"] },
          a: { type: "number" },
          b: { type: "number" },
        },
        required: ["operation", "a", "b"],
      },
      handler: async ({ operation, a, b }) => {
        return operation === "add" ? a + b : a - b;
      },
    },
  ],
});

const response = await agent.chat("What is 15 + 27?");
```
### With System Prompt

```typescript
const agent = createAgent({
  provider: "anthropic",
  system: "You are an expert Python developer. Always provide code examples.",
});
```
## Smart Tool Calling
Smart Tool Calling adds reliability and performance to tool execution with automatic retry, timeout, and caching.
### Basic Configuration

```typescript
const agent = createAgent({
  provider: "openai",
  toolConfig: {
    forceToolUse: true,       // Force model to use tools
    maxRetries: 3,            // Retry up to 3 times on failure
    toolTimeout: 30000,       // 30 second timeout
    onToolNotCalled: "retry", // Action when tool not called
  },
  tools: [...],
});
```
### With Caching

```typescript
const agent = createAgent({
  provider: "openai",
  toolConfig: {
    cacheResults: {
      enabled: true,
      ttl: 300000,  // Cache for 5 minutes
      maxSize: 100, // Store up to 100 results
    },
  },
  tools: [...],
});
```
### Direct Tool Execution

```typescript
// Execute a tool directly with retry and caching
const result = await agent.executeTool("get_weather", {
  location: "San Francisco, CA",
});
```
### Configuration Options

| Option | Type | Default | Description |
|---|---|---|---|
| `forceToolUse` | boolean | `false` | Force the model to use tools when available |
| `maxRetries` | number | `3` | Maximum retry attempts on tool failure |
| `onToolNotCalled` | string | `"retry"` | Action when tool not called: `"retry"`, `"error"`, `"warn"`, `"allow"` |
| `toolTimeout` | number | `30000` | Timeout for tool execution (ms) |
| `cacheResults.enabled` | boolean | `true` | Enable result caching |
| `cacheResults.ttl` | number | `300000` | Cache time-to-live (ms) |
| `cacheResults.maxSize` | number | `100` | Maximum cached results |
| `debug` | boolean | `false` | Enable debug logging |
### Complete Example

```typescript
const agent = createAgent({
  provider: "openai",
  model: "gpt-4-turbo-preview",
  toolConfig: {
    forceToolUse: true,
    maxRetries: 3,
    onToolNotCalled: "retry",
    toolTimeout: 30000,
    cacheResults: {
      enabled: true,
      ttl: 300000,
      maxSize: 100,
    },
    debug: true,
  },
  tools: [
    {
      name: "get_weather",
      description: "Get current weather for a location",
      parameters: {
        type: "object",
        properties: {
          location: { type: "string" },
        },
        required: ["location"],
      },
      handler: async ({ location }) => {
        // Your weather API logic
        return { location, temp: 72, condition: "Sunny" };
      },
    },
  ],
});

// Use in chat - tools are automatically called
const response = await agent.chat("What's the weather in NYC?");

// Or execute directly with retry and caching
const result = await agent.executeTool("get_weather", {
  location: "New York, NY",
});
```
## MCP Servers

Create Model Context Protocol servers to expose tools and resources.

### Basic MCP Server

```typescript
import { createMCPServer } from "mcp-agent-kit";

const server = createMCPServer({
  name: "my-mcp-server",
  port: 7777,
  logLevel: "info",
});

await server.start(); // Starts on stdio by default
```
### With Tools

```typescript
const server = createMCPServer({
  name: "weather-server",
  tools: [
    {
      name: "get_weather",
      description: "Get current weather",
      inputSchema: {
        type: "object",
        properties: {
          location: { type: "string" },
          units: { type: "string", enum: ["celsius", "fahrenheit"] },
        },
        required: ["location"],
      },
      handler: async ({ location, units = "celsius" }) => {
        // Your weather API logic here
        return { location, temp: 22, units, condition: "Sunny" };
      },
    },
  ],
});
```
### With Resources

```typescript
const server = createMCPServer({
  name: "data-server",
  resources: [
    {
      uri: "config://app-settings",
      name: "Application Settings",
      description: "Current app configuration",
      mimeType: "application/json",
      handler: async () => {
        return JSON.stringify({ version: "1.0.0", env: "production" });
      },
    },
  ],
});
```
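Resource handlers are ordinary async functions, so they can serve live data as easily as static strings. A minimal sketch that reads a file from disk on each request, assuming Node's `fs/promises` (the URI and file path are illustrative):

```typescript
import { readFile } from "node:fs/promises";
import { createMCPServer } from "mcp-agent-kit";

// Sketch: expose a file on disk as an MCP resource.
const server = createMCPServer({
  name: "docs-server",
  resources: [
    {
      uri: "file://readme", // illustrative URI
      name: "Project README",
      description: "The project README, read from disk on each request",
      mimeType: "text/markdown",
      handler: async () => readFile("README.md", "utf8"),
    },
  ],
});

await server.start();
```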
### WebSocket Transport

```typescript
const server = createMCPServer({
  name: "ws-server",
  port: 8080,
});

await server.start("websocket"); // Use WebSocket instead of stdio
```
## LLM Router

Route requests to different LLMs based on intelligent rules.

### Basic Router

```typescript
import { createLLMRouter } from "mcp-agent-kit";

const router = createLLMRouter({
  rules: [
    {
      when: (input) => input.length < 200,
      use: { provider: "openai", model: "gpt-4-turbo-preview" },
    },
    {
      when: (input) => input.includes("code"),
      use: { provider: "anthropic", model: "claude-3-5-sonnet-20241022" },
    },
    {
      default: true,
      use: { provider: "openai", model: "gpt-4-turbo-preview" },
    },
  ],
});

const response = await router.route("Write a function to sort an array");
```
### With Fallback and Retry

```typescript
const router = createLLMRouter({
  rules: [...],
  fallback: {
    provider: "openai",
    model: "gpt-4-turbo-preview",
  },
  retryAttempts: 3,
  logLevel: "debug",
});
```
### Router Statistics

```typescript
const stats = router.getStats();
console.log(stats);
// { totalRules: 3, totalAgents: 2, hasFallback: true }

const agents = router.listAgents();
console.log(agents);
// ['openai:gpt-4-turbo-preview', 'anthropic:claude-3-5-sonnet-20241022']
```
## Chatbots

Create conversational AI with automatic memory management.

### Basic Chatbot

```typescript
import { createChatbot, createAgent } from "mcp-agent-kit";

const bot = createChatbot({
  agent: createAgent({ provider: "openai" }),
  system: "You are a helpful assistant",
  maxHistory: 10,
});

await bot.chat("Hi, I am learning TypeScript");
await bot.chat("Can you help me with interfaces?");
await bot.chat("Thanks!");
```
### With Router

```typescript
const bot = createChatbot({
  router: createLLMRouter({ rules: [...] }),
  maxHistory: 20,
});
```
### Memory Management

```typescript
// Get conversation history
const history = bot.getHistory();

// Get statistics
const stats = bot.getStats();
console.log(stats);
// {
//   messageCount: 6,
//   userMessages: 3,
//   assistantMessages: 3,
//   oldestMessage: Date,
//   newestMessage: Date
// }

// Reset conversation
bot.reset();

// Update system prompt
bot.setSystemPrompt("You are now a Python expert");
```
## API Requests

Simplified HTTP requests with automatic retry and timeout.

### Basic Request

```typescript
import { api } from "mcp-agent-kit";

const response = await api.get("https://api.example.com/data");
console.log(response.data);
```
### POST Request

```typescript
const response = await api.post(
  "https://api.example.com/users",
  { name: "John", email: "john@example.com" },
  {
    name: "create-user",
    headers: { "Content-Type": "application/json" },
  }
);
```
### With Retry and Timeout

```typescript
const response = await api.request({
  name: "important-request",
  url: "https://api.example.com/data",
  method: "GET",
  timeout: 10000, // 10 seconds
  retries: 5,     // 5 attempts
  query: { page: 1, limit: 10 },
});
```
### All HTTP Methods

```typescript
await api.get(url, config);
await api.post(url, body, config);
await api.put(url, body, config);
await api.patch(url, body, config);
await api.delete(url, config);
```
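Retries happen inside the helpers, so application code only deals with the final outcome. A minimal sketch, assuming the returned promise rejects once all retry attempts are exhausted (the URL is illustrative):

```typescript
import { api } from "mcp-agent-kit";

try {
  // Built-in retry runs first; this only throws after all attempts fail (assumption).
  const response = await api.get("https://api.example.com/data", {
    timeout: 5000,
    retries: 2,
  });
  console.log(response.data);
} catch (error) {
  console.error("Request failed after all retries:", error);
}
```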
## Configuration

### Environment Variables

All configuration is optional. Set these environment variables or pass them in code:

```bash
# MCP Server
MCP_SERVER_NAME=my-server
MCP_PORT=7777

# Logging
LOG_LEVEL=info  # debug | info | warn | error

# LLM API Keys
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=...
OLLAMA_HOST=http://localhost:11434
```
### Using .env File

```bash
# .env
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
LOG_LEVEL=debug
```

The package automatically loads `.env` files using `dotenv`.
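Because `apiKey` is also accepted in code (see the Agent API reference below), environment variables can be bypassed entirely. A minimal sketch of supplying the key explicitly:

```typescript
import { createAgent } from "mcp-agent-kit";

// Equivalent to relying on OPENAI_API_KEY in the environment,
// but with the key passed via the documented apiKey option.
const agent = createAgent({
  provider: "openai",
  apiKey: process.env.OPENAI_API_KEY, // or a key loaded from your own secrets store
});
```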
## Examples

Check out the `/examples` directory for complete working examples:

- `basic-agent.ts` - Simple agent usage
- `smart-tool-calling.ts` - Smart tool calling with retry and caching
- `mcp-server.ts` - MCP server with tools and resources
- `mcp-server-websocket.ts` - MCP server with WebSocket
- `llm-router.ts` - Intelligent routing between LLMs
- `chatbot-basic.ts` - Chatbot with conversation memory
- `chatbot-with-router.ts` - Chatbot using router
- `api-requests.ts` - HTTP requests with retry
### Running Examples

```bash
# Install dependencies
npm install

# Run an example
npx ts-node examples/basic-agent.ts
```
## API Reference

### Agent API

#### `createAgent(config: AgentConfig)`

Creates a new AI agent instance.

**Parameters:**

- `provider` (required): LLM provider - `"openai"`, `"anthropic"`, `"gemini"`, or `"ollama"`
- `model` (optional): Model name (defaults to the provider's default)
- `temperature` (optional): Sampling temperature 0-2 (default: 0.7)
- `maxTokens` (optional): Maximum tokens in response (default: 2000)
- `apiKey` (optional): API key (read from env if not provided)
- `tools` (optional): Array of tool definitions
- `system` (optional): System prompt
- `toolConfig` (optional): Smart tool calling configuration

**Returns:** Agent instance

**Methods:**

- `chat(message: string): Promise<AgentResponse>` - Send a message and get a response
- `executeTool(name: string, params: any): Promise<any>` - Execute a tool directly
#### `AgentResponse`

Response object returned by `agent.chat()`:

```typescript
{
  content: string;      // Response text
  toolCalls?: Array<{   // Tools that were called
    name: string;
    arguments: any;
  }>;
  usage?: {             // Token usage
    promptTokens: number;
    completionTokens: number;
    totalTokens: number;
  };
}
```
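Since `toolCalls` and `usage` are optional, callers should guard before reading them. A short sketch of consuming the shape above:

```typescript
const response = await agent.chat("What is 15 + 27?");

console.log(response.content);

// Both fields are optional, so check before reading them.
for (const call of response.toolCalls ?? []) {
  console.log(`tool called: ${call.name}`, call.arguments);
}
if (response.usage) {
  console.log(`tokens used: ${response.usage.totalTokens}`);
}
```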
### MCP Server API

#### `createMCPServer(config: MCPServerConfig)`

Creates a new MCP server instance.

**Parameters:**

- `name` (optional): Server name (default: from env or `"mcp-server"`)
- `port` (optional): Port number (default: 7777)
- `logLevel` (optional): Log level - `"debug"`, `"info"`, `"warn"`, `"error"`
- `tools` (optional): Array of tool definitions
- `resources` (optional): Array of resource definitions

**Returns:** MCP Server instance

**Methods:**

- `start(transport?: "stdio" | "websocket"): Promise<void>` - Start the server
### Router API

#### `createLLMRouter(config: LLMRouterConfig)`

Creates a new LLM router instance.

**Parameters:**

- `rules` (required): Array of routing rules
- `fallback` (optional): Fallback provider configuration
- `retryAttempts` (optional): Number of retry attempts (default: 3)
- `logLevel` (optional): Log level

**Returns:** Router instance

**Methods:**

- `route(input: string): Promise<AgentResponse>` - Route input to the appropriate LLM
- `getStats(): object` - Get router statistics
- `listAgents(): string[]` - List all configured agents
### Chatbot API

#### `createChatbot(config: ChatbotConfig)`

Creates a new chatbot instance with conversation memory.

**Parameters:**

- `agent` or `router` (required): Agent or router instance
- `system` (optional): System prompt
- `maxHistory` (optional): Maximum messages to keep (default: 10)

**Returns:** Chatbot instance

**Methods:**

- `chat(message: string): Promise<AgentResponse>` - Send message with context
- `getHistory(): ChatMessage[]` - Get conversation history
- `getStats(): object` - Get conversation statistics
- `reset(): void` - Clear conversation history
- `setSystemPrompt(prompt: string): void` - Update system prompt
### API Request Helpers

#### `api.request(config: APIRequestConfig)`

Make an HTTP request with retry and timeout.

**Parameters:**

- `name` (optional): Request name for logging
- `url` (required): Request URL
- `method` (optional): HTTP method (default: `"GET"`)
- `headers` (optional): Request headers
- `query` (optional): Query parameters
- `body` (optional): Request body
- `timeout` (optional): Timeout in ms (default: 30000)
- `retries` (optional): Retry attempts (default: 3)

**Returns:** `Promise<APIResponse>`

**Convenience Methods:**

- `api.get(url, config?)` - GET request
- `api.post(url, body, config?)` - POST request
- `api.put(url, body, config?)` - PUT request
- `api.patch(url, body, config?)` - PATCH request
- `api.delete(url, config?)` - DELETE request
## Advanced Usage

### Custom Provider

```typescript
// Coming soon: Plugin system for custom providers
```

### Middleware

```typescript
// Coming soon: Middleware support for request/response processing
```

### Streaming Responses

```typescript
// Coming soon: Streaming support for real-time responses
```
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## License

MIT © Dominique Kossi
## Acknowledgments

- Built with TypeScript
- Uses the MCP SDK
- Powered by OpenAI, Anthropic, Google, and Ollama
## Support

- Email: houessoudominique@gmail.com
- Issues: GitHub Issues
- Discussions: GitHub Discussions

Made by developers, for developers