A Ruby gem that exposes Large Language Models (LLMs) via the Model Context Protocol (MCP), enabling seamless integration of AI capabilities into your development workflow.
llm-mcp creates an MCP server that provides standardized access to various LLM providers (OpenAI, Google Gemini, and OpenAI-compatible APIs) while supporting advanced features like session management, conversation persistence, and integration with external MCP tools.
Add this line to your application's Gemfile:
gem 'llm-mcp'
And then execute:
$ bundle install
Or install it yourself as:
$ gem install llm-mcp
Set up your API keys based on the provider you want to use:
# For OpenAI
export OPENAI_API_KEY="your-openai-api-key"
# For Google Gemini
export GEMINI_API_KEY="your-gemini-api-key"
# or
export GOOGLE_API_KEY="your-google-api-key"
Start an MCP server that exposes an LLM:
# Using OpenAI
llm-mcp mcp-serve --provider openai --model gpt-4
# Using Google Gemini
llm-mcp mcp-serve --provider google --model gemini-1.5-flash
# Using a custom OpenAI-compatible API
llm-mcp mcp-serve --provider openai --model llama-3.1-8b --base-url https://api.groq.com/openai/v1
# Advanced options:
#   --verbose                  Enable verbose logging
#   --json-log-path PATH       Log to a JSON file
#   --session-id ID            Resume a specific session
#   --session-path PATH        Custom session storage location
#   --append-system-prompt S   Append text to the system prompt
#   --skip-model-validation    Skip model name validation
llm-mcp mcp-serve \
  --provider openai \
  --model gpt-4 \
  --verbose \
  --json-log-path logs/llm.json \
  --session-id my-project \
  --session-path ~/my-sessions \
  --append-system-prompt "You are a Ruby expert" \
  --skip-model-validation
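For example, because sessions persist to disk, separate runs that share a --session-id continue the same conversation:
# First run creates and persists the session
llm-mcp mcp-serve --provider openai --model gpt-4 --session-id my-project
# A later run with the same --session-id resumes the saved history
llm-mcp mcp-serve --provider openai --model gpt-4 --session-id my-project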
llm-mcp can connect to other MCP servers, allowing the LLM to use their tools. Create an MCP configuration file (e.g., ~/.mcp/config.json):
{
"mcpServers": {
"filesystem": {
"command": "npx",
"args": ["@modelcontextprotocol/server-filesystem", "/tmp"]
},
"github": {
"command": "mcp-github",
"env": {
"GITHUB_TOKEN": "your-github-token"
}
},
"http-api": {
"url": "https://api.example.com/mcp/sse",
"transport": "sse",
"headers": {
"Authorization": "Bearer your-token"
}
}
}
}
llm-mcp mcp-serve \
--provider openai \
--model gpt-4 \
--mcp-config ~/.mcp/config.json
Now the LLM can use tools from the connected MCP servers in its responses!
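For example, with the filesystem server above connected, the task tool (described in the next section) can prompt the LLM to use those tools. The prompt below is illustrative:
{
  "method": "tools/call",
  "params": {
    "name": "task",
    "arguments": {
      "prompt": "List the files in /tmp and summarize what you find"
    }
  }
}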
task
Send a request to the LLM and get a response.
Parameters:
- prompt (required): The message or question for the LLM
- temperature (optional): Control randomness (0.0-2.0, default: 0.7)
- max_tokens (optional): Maximum response length
Example Request:
{
"method": "tools/call",
"params": {
"name": "task",
"arguments": {
"prompt": "Explain the concept of dependency injection",
"temperature": 0.7,
"max_tokens": 500
}
}
}
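A typical reply follows the standard MCP tool-result shape; the text field is whatever the model returns (the content below is illustrative):
{
  "content": [
    {
      "type": "text",
      "text": "Dependency injection is a design pattern in which an object receives its collaborators from the outside instead of constructing them itself. ..."
    }
  ]
}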
reset_session
Clear the conversation history and start fresh.
Example Request:
{
"method": "tools/call",
"params": {
"name": "reset_session",
"arguments": {}
}
}
Sessions automatically persist conversations to disk, allowing you to resume a conversation later via the --session-id flag.
Sessions are stored in ~/.llm-mcp/sessions/ by default, with each session saved as a JSON file containing the conversation history.
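Because sessions are plain JSON files, they can be inspected with ordinary tooling. A minimal Ruby sketch (the exact file schema isn't documented here, so the parsed structure is printed as-is):
require "json"

# Default session directory used by llm-mcp
session_dir = File.expand_path("~/.llm-mcp/sessions")

# Print each stored session's name and parsed contents
Dir.glob(File.join(session_dir, "*.json")).each do |path|
  puts "Session: #{File.basename(path, '.json')}"
  puts JSON.pretty_generate(JSON.parse(File.read(path)))
end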
Enable JSON logging for comprehensive debugging:
llm-mcp mcp-serve \
--provider openai \
--model gpt-4 \
--json-log-path logs/llm.json \
--verbose
Logs are written in JSON format, making them easy to parse and analyze programmatically.
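A quick way to skim the log from Ruby, assuming one JSON object per line (adjust the parsing if the actual layout differs):
require "json"

# Read the JSON log entry by entry, skipping any non-JSON lines
File.foreach("logs/llm.json") do |line|
  entry = JSON.parse(line) rescue next
  puts entry.inspect
end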
Add to your Claude Desktop configuration (~/Library/Application Support/Claude/claude_desktop_config.json):
{
"mcpServers": {
"llm-mcp": {
"command": "llm-mcp",
"args": ["mcp-serve", "--provider", "openai", "--model", "gpt-4"],
"env": {
"OPENAI_API_KEY": "your-api-key"
}
}
}
}
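After saving the configuration, restart Claude Desktop so it picks up the new server.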
require 'mcp-client'
# Spawn llm-mcp as a child process and communicate with it over stdio
client = MCP::Client.new
client.connect_stdio('llm-mcp', 'mcp-serve', '--provider', 'openai', '--model', 'gpt-4')
# Use the task tool
response = client.call_tool('task', {
prompt: "Write a haiku about Ruby programming",
temperature: 0.9
})
puts response.content
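The same client can invoke the other exposed tools; for example, clearing the conversation history:
# Clear the conversation history and start fresh
client.call_tool('reset_session', {})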
Create a powerful AI assistant by combining llm-mcp with other MCP servers:
{
"mcpServers": {
"llm": {
"command": "llm-mcp",
"args": ["mcp-serve", "--provider", "openai", "--model", "gpt-4", "--mcp-config", "mcp-tools.json"]
},
"filesystem": {
"command": "mcp-filesystem",
"args": ["/project"]
},
"git": {
"command": "mcp-git"
}
}
}
After checking out the repo, run bin/setup to install dependencies. Then run rake test to run the tests.
# Install dependencies
bundle install
# Run tests
bundle exec rake test
# Run linter
bundle exec rubocop -A
# Install gem locally
bundle exec rake install
Bug reports and pull requests are welcome on GitHub at https://github.com/parruda/llm-mcp.
The gem is available as open source under the terms of the MIT License.