Gemini MCP Server
A professional, production-ready Model Context Protocol (MCP) server that provides seamless integration with Google's Gemini AI models. Built with TypeScript and designed for enterprise use, this package offers robust error handling, comprehensive logging, and easy deployment.
Quick Start
The easiest way to get started is using npx - no installation required:
# Get your API key from Google AI Studio
# https://makersuite.google.com/app/apikey
# Test the server (optional)
npx @houtini/gemini-mcp
# Add to Claude Desktop (see configuration below)
Table of Contents
- Features
- Installation
- Configuration
- Usage Examples
- API Reference
- Development
- Troubleshooting
- Contributing
Features
Core Functionality
- Multi-Model Support - Access to 6 Gemini models including the latest Gemini 2.5 Flash
- Chat Interface - Advanced chat functionality with customisable parameters
- Google Search Grounding - Real-time web search integration enabled by default for current information
- Model Information - Detailed model capabilities and specifications
- Fine-Grained Control - Temperature, token limits, and system prompts
Enterprise Features
- Professional Architecture - Modular services-based design
- Robust Error Handling - Comprehensive error handling with detailed logging
- Winston Logging - Production-ready logging with file rotation
- Security Focused - No hardcoded credentials, environment-based configuration
- Full TypeScript - Complete type safety and IntelliSense support
- High Performance - Optimised for minimal latency and resource usage
Installation
Prerequisites
- Node.js v24.0.0 or higher
- Google AI Studio API Key (Get your key here)
Recommended: No Installation Required
The simplest approach uses npx to run the latest version automatically:
# No installation needed - npx handles everything
npx @houtini/gemini-mcp
Alternative Installation Methods
Global Installation
# Install once, use anywhere
npm install -g @houtini/gemini-mcp
gemini-mcp
Local Project Installation
# Install in your project
npm install @houtini/gemini-mcp
# Run with npx
npx @houtini/gemini-mcp
From Source (Developers)
git clone https://github.com/houtini-ai/gemini-mcp.git
cd gemini-mcp
npm install
npm run build
npm start
Configuration
Step 1: Get Your API Key
Visit Google AI Studio to create your free API key.
Step 2: Configure Claude Desktop
Add this configuration to your Claude Desktop config file:
Windows: %APPDATA%\Claude\claude_desktop_config.json
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Recommended Configuration (using npx)
{
"mcpServers": {
"gemini": {
"command": "npx",
"args": ["@houtini/gemini-mcp"],
"env": {
"GEMINI_API_KEY": "your-api-key-here"
}
}
}
}
Benefits of this approach:
- No global installation required
- Always uses the latest version
- Cleaner system (no global packages)
- Works out of the box
Alternative: Global Installation
{
"mcpServers": {
"gemini": {
"command": "gemini-mcp",
"env": {
"GEMINI_API_KEY": "your-api-key-here"
}
}
}
}
Note: Requires npm install -g @houtini/gemini-mcp first
Alternative: Local Installation
{
"mcpServers": {
"gemini": {
"command": "node",
"args": ["./node_modules/@houtini/gemini-mcp/dist/index.js"],
"env": {
"GEMINI_API_KEY": "your-api-key-here"
}
}
}
}
Note: Only works if installed locally in the current directory
Step 3: Restart Claude Desktop
After updating the configuration file, restart Claude Desktop to load the new MCP server.
Optional Configuration
You can add additional environment variables for more control:
{
"mcpServers": {
"gemini": {
"command": "npx",
"args": ["@houtini/gemini-mcp"],
"env": {
"GEMINI_API_KEY": "your-api-key-here",
"LOG_LEVEL": "info"
}
}
}
}
Available Environment Variables:
Variable | Default | Description |
---|---|---|
GEMINI_API_KEY | required | Your Google AI Studio API key |
LOG_LEVEL | info | Logging level: debug, info, warn, error |
Using .env File (Development)
For development or testing, create a .env file:
# Google Gemini Configuration
GEMINI_API_KEY=your-api-key-here
# Logging Configuration (optional)
LOG_LEVEL=info
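At startup the server reads these variables from the environment. As a rough sketch of how this kind of environment-based configuration is typically loaded in a TypeScript project - the actual src/config/ module may differ, and the use of dotenv here is an assumption:

// config-sketch.ts - illustrative only, not the package's actual config module
import { config as loadDotenv } from "dotenv";

loadDotenv(); // reads .env into process.env when the file exists

interface GeminiMcpConfig {
  apiKey: string;
  logLevel: "debug" | "info" | "warn" | "error";
}

export function loadConfig(): GeminiMcpConfig {
  const apiKey = process.env.GEMINI_API_KEY;
  if (!apiKey) {
    // Matches the "GEMINI_API_KEY environment variable not set" error covered in Troubleshooting
    throw new Error("GEMINI_API_KEY environment variable not set");
  }
  const logLevel = (process.env.LOG_LEVEL ?? "info") as GeminiMcpConfig["logLevel"];
  return { apiKey, logLevel };
}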
Usage Examples
Basic Chat
Ask Claude to use Gemini:
Can you help me understand quantum computing using Gemini?
Claude will automatically use the gemini_chat tool to get a response from Gemini.
Creative Writing
Use Gemini to write a short story about artificial intelligence discovering creativity.
Technical Analysis
Can you use Gemini Pro to explain the differences between various machine learning algorithms?
Model Selection
Use Gemini 1.5 Pro to analyse this code and suggest improvements.
Getting Model Information
Show me all available Gemini models and their capabilities.
Google Search Grounding
This server includes Google Search grounding functionality powered by Google's real-time web search, providing Gemini models with access to current web information. This feature is enabled by default and significantly enhances response accuracy for questions requiring up-to-date information.
Key Benefits
- Real-time Information - Access to current news, events, stock prices, weather, and developments
- Factual Accuracy - Reduces AI hallucinations by grounding responses in verified web sources
- Source Citations - Automatic citation of sources with search queries used
- Seamless Integration - Works transparently without changing your existing workflow
- Smart Search - AI automatically determines when to search based on query content
How Google Search Grounding Works
When you ask a question that benefits from current information, the system works through these steps (a short code sketch follows the list):
1. Analyses your query to determine if web search would improve the answer
2. Generates relevant search queries automatically based on your question
3. Performs Google searches using multiple targeted queries
4. Processes search results and synthesises information from multiple sources
5. Provides an enhanced response with inline citations and source links
6. Shows search metadata, including the actual queries used, for transparency
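As an illustration of the mechanism, the sketch below shows how a call to Gemini can attach the Google Search tool so the model grounds its answer; the tool name (googleSearchRetrieval) and wiring are assumptions based on the @google/generative-ai SDK, not this package's actual source:

// grounding-sketch.ts - a hedged illustration, not the package's implementation
import { GoogleGenerativeAI } from "@google/generative-ai";

async function askWithGrounding(apiKey: string, message: string, grounding: boolean): Promise<string> {
  const genAI = new GoogleGenerativeAI(apiKey);
  const model = genAI.getGenerativeModel({
    model: "gemini-2.5-flash",
    // Attaching a search tool is what enables grounding; leaving it off keeps
    // the call purely generative. The exact tool shape is an assumption.
    tools: grounding ? [{ googleSearchRetrieval: {} }] : undefined,
  });
  const result = await model.generateContent(message);
  return result.response.text();
}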
Perfect For These Use Cases
Current Events & News
What are the latest developments in AI announced this month?
What's happening with the 2025 climate negotiations?
Recent breakthroughs in quantum computing research?
Real-time Data
Current stock prices for major tech companies
Today's weather forecast for London
Latest cryptocurrency market trends
Recent Developments
New software releases and updates this week
Recent scientific discoveries in medicine
Latest policy changes in renewable energy
Fact Checking & Verification
Verify recent statements about climate change
Check the latest statistics on global internet usage
Confirm recent merger and acquisition announcements
Controlling Grounding Behaviour
Default Behaviour: Grounding is enabled by default for optimal results and accuracy.
Disable for Creative Tasks: When you want purely creative or hypothetical responses:
Use Gemini without web search to write a fictional story about dragons in space.
Write a creative poem about imaginary colours that don't exist.
Technical Control: When using the API directly, use the grounding parameter:
{
"message": "Write a creative story about time travel",
"model": "gemini-2.5-flash",
"grounding": false
}
{
"message": "What are the latest developments in renewable energy?",
"model": "gemini-2.5-flash",
"grounding": true
}
Understanding Grounded Responses
When grounding is active, responses include:
Source Citations: Links to the websites used for information
Sources: (https://example.com/article1) (https://example.com/article2)
Search Transparency: The actual search queries used
Search queries used: latest AI developments 2025, OpenAI GPT-5 release, Google Gemini updates
Enhanced Accuracy: Information synthesis from multiple authoritative sources rather than relying solely on training data
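The sketch below shows how the citations and search queries described above could be pulled out of a grounded response and formatted; the field names (groundingMetadata, webSearchQueries, groundingChunks) are written as local types because they are assumptions about the response shape rather than a guaranteed API:

// grounding-metadata-sketch.ts - illustrative formatting of a grounded response
interface GroundingChunk {
  web?: { uri?: string; title?: string };
}

interface GroundingMetadata {
  webSearchQueries?: string[];        // assumed field: the queries the model ran
  groundingChunks?: GroundingChunk[]; // assumed field: cited web sources
}

function formatGroundingFooter(meta?: GroundingMetadata): string {
  if (!meta) return "";
  const sources = (meta.groundingChunks ?? [])
    .map((chunk) => chunk.web?.uri)
    .filter((uri): uri is string => Boolean(uri))
    .map((uri) => `(${uri})`)
    .join(" ");
  const queries = (meta.webSearchQueries ?? []).join(", ");
  return [
    sources ? `Sources: ${sources}` : "",
    queries ? `Search queries used: ${queries}` : "",
  ].filter(Boolean).join("\n");
}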
API Reference
Available Tools
gemini_chat
Chat with Gemini models to generate text responses.
Parameters:
Parameter | Type | Required | Default | Description |
---|---|---|---|---|
message | string | Yes | - | The message to send to Gemini |
model | string | No | "gemini-2.5-flash" | Model to use |
temperature | number | No | 0.7 | Controls randomness (0.0-1.0) |
max_tokens | integer | No | 2048 | Maximum tokens in response (1-8192) |
system_prompt | string | No | - | System instruction to guide the model |
grounding | boolean | No | true | Enable Google Search grounding for real-time information |
Example:
{
"message": "What are the latest developments in quantum computing?",
"model": "gemini-1.5-pro",
"temperature": 0.5,
"max_tokens": 1000,
"system_prompt": "You are a helpful technology expert. Provide current, factual information with sources where possible.",
"grounding": true
}
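For reference, the parameters above map onto a TypeScript shape like the following; this interface is purely illustrative and is not exported by the package:

// gemini-chat-params-sketch.ts - illustrative typing of the gemini_chat input
interface GeminiChatParams {
  message: string;        // required
  model?: string;         // defaults to "gemini-2.5-flash"
  temperature?: number;   // 0.0-1.0, defaults to 0.7
  max_tokens?: number;    // 1-8192, defaults to 2048
  system_prompt?: string; // optional system instruction
  grounding?: boolean;    // defaults to true (Google Search grounding on)
}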
gemini_list_models
Retrieve information about all available Gemini models.
Parameters: None required
Example:
{}
Response includes:
- Model names and display names
- Descriptions of each model's strengths
- Recommended use cases
Available Models
Model | Best For | Description |
---|---|---|
gemini-2.5-flash | General use, latest features | Latest Gemini 2.5 Flash - Fast, versatile performance |
gemini-2.0-flash | Speed-optimised tasks | Gemini 2.0 Flash - Fast, efficient model |
gemini-1.5-flash | Quick responses | Gemini 1.5 Flash - Fast, efficient model |
gemini-1.5-pro | Complex reasoning | Gemini 1.5 Pro - Advanced reasoning capabilities |
gemini-pro | Balanced performance | Gemini Pro - Balanced performance for most tasks |
gemini-pro-vision | Multimodal tasks | Gemini Pro Vision - Text and image understanding |
Development
Building from Source
# Clone the repository
git clone https://github.com/houtini-ai/gemini-mcp.git
cd gemini-mcp
# Install dependencies
npm install
# Build the project
npm run build
# Run in development mode
npm run dev
Scripts
Command | Description |
---|---|
npm run build | Compile TypeScript to JavaScript |
npm run dev | Run in development mode with live reload |
npm start | Run the compiled server |
npm test | Run test suite |
npm run lint | Check code style |
npm run lint:fix | Fix linting issues automatically |
Project Structure
src/
├── config/                  # Configuration management
│   ├── index.ts             # Main configuration
│   └── types.ts             # Configuration types
├── services/                # Core business logic
│   ├── base-service.ts
│   └── gemini/              # Gemini service implementation
│       ├── index.ts
│       └── types.ts
├── tools/                   # MCP tool implementations
│   ├── gemini-chat.ts
│   └── gemini-list-models.ts
├── utils/                   # Utility functions
│   ├── logger.ts            # Winston logging setup
│   └── error-handler.ts
├── cli.ts                   # CLI entry point
└── index.ts                 # Main server implementation
Architecture
The server follows a clean, layered architecture (a short code sketch follows the list):
- CLI Layer (cli.ts) - Command-line interface
- Server Layer (index.ts) - MCP protocol handling
- Tools Layer (tools/) - MCP tool implementations
- Service Layer (services/) - Business logic and API integration
- Utility Layer (utils/) - Cross-cutting concerns
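To make the layering concrete, here is a rough sketch of how a service built on a shared base class might look; the class and method names are assumptions inferred from the file layout above, not the package's actual code:

// services-sketch.ts - hypothetical reconstruction of the services pattern
type Logger = { info: (msg: string) => void; error: (msg: string) => void };

abstract class BaseService {
  constructor(protected readonly logger: Logger) {}

  // Shared wrapper so every service logs and rethrows errors consistently (illustrative)
  protected async execute<T>(label: string, fn: () => Promise<T>): Promise<T> {
    try {
      this.logger.info(`${label} started`);
      return await fn();
    } catch (error) {
      this.logger.error(`${label} failed: ${String(error)}`);
      throw error;
    }
  }
}

class GeminiService extends BaseService {
  constructor(logger: Logger, private readonly apiKey: string) {
    super(logger);
  }

  async chat(message: string): Promise<string> {
    return this.execute("gemini_chat", async () => {
      // The tools layer calls into this method; the real implementation
      // delegates to the Google Generative AI SDK using this.apiKey.
      return `placeholder response to: ${message}`;
    });
  }
}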
Troubleshooting
Common Issues
"GEMINI_API_KEY environment variable not set"
Solution:
# Make sure your API key is set in the Claude Desktop configuration
# See the Configuration section above
Server not appearing in Claude Desktop
Solutions:
- Restart Claude Desktop after updating configuration
- Check your configuration file path:
  - Windows: %APPDATA%\Claude\claude_desktop_config.json
  - macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Verify JSON syntax - use a JSON validator if needed
- Ensure your API key is valid - test at Google AI Studio
"Module not found" errors with npx
Solutions:
# Skip the install prompt so npx fetches the package if it is missing
npx --yes @houtini/gemini-mcp
# Or install globally if preferred
npm install -g @houtini/gemini-mcp
Node.js version issues
Solution:
# Check your Node.js version
node --version
# Should be v24.0.0 or higher
# Install latest Node.js from https://nodejs.org
Debug Mode
Enable detailed logging by setting LOG_LEVEL=debug in your Claude Desktop configuration:
{
"mcpServers": {
"gemini": {
"command": "npx",
"args": ["@houtini/gemini-mcp"],
"env": {
"GEMINI_API_KEY": "your-api-key-here",
"LOG_LEVEL": "debug"
}
}
}
}
Log Files
Logs are written to:
- Console output (visible in Claude Desktop developer tools)
- logs/combined.log - All log levels
- logs/error.log - Error logs only
Testing Your Setup
Test the server with these Claude queries:
- Basic connectivity: "Can you list the available Gemini models?"
- Simple chat: "Use Gemini to explain photosynthesis."
- Advanced features: "Use Gemini 1.5 Pro with temperature 0.9 to write a creative poem about coding."
Performance Tuning
For better performance (a small example follows this list):
- Adjust token limits based on your use case
- Use appropriate models (Flash for speed, Pro for complex tasks)
- Monitor logs for rate limiting or API issues
- Set reasonable temperature values (0.7 for balanced, 0.3 for focused, 0.9 for creative)
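As a small illustration of those guidelines, a caller could keep a preset table of generation settings per task type; the preset names are made up for this example and the values simply mirror the suggestions above:

// presets-sketch.ts - illustrative presets based on the tuning guidance above
type TaskPreset = { model: string; temperature: number; max_tokens: number };

const presets: Record<"balanced" | "focused" | "creative", TaskPreset> = {
  balanced: { model: "gemini-2.5-flash", temperature: 0.7, max_tokens: 2048 },
  focused:  { model: "gemini-1.5-pro",   temperature: 0.3, max_tokens: 2048 },
  creative: { model: "gemini-2.5-flash", temperature: 0.9, max_tokens: 4096 },
};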
Contributing
Contributions are welcome! Please follow these steps:
- Fork the repository
- Create a feature branch: git checkout -b feature/amazing-feature
- Make your changes and add tests if applicable
- Ensure all tests pass: npm test
- Lint your code: npm run lint:fix
- Build the project: npm run build
- Commit your changes: git commit -m 'Add amazing feature'
- Push to the branch: git push origin feature/amazing-feature
- Open a Pull Request
Development Guidelines
- Follow TypeScript best practices
- Add tests for new functionality
- Update documentation as needed
- Use conventional commit messages
- Ensure backwards compatibility
License
This project is licensed under the MIT License - see the LICENSE file for details.
Support
- GitHub Issues: Report bugs or request features
- GitHub Discussions: Ask questions or share ideas
Changelog
v1.0.3 (Latest)
Enhanced Google Search Grounding
- Fixed grounding metadata field name issues for improved reliability
- Enhanced source citation processing and display
- Verified compatibility with latest Google Generative AI SDK (v0.21.0)
- Comprehensive grounding documentation and usage examples
- Resolved field naming inconsistencies in grounding response handling
- Improved grounding metadata debugging and error handling
v1.0.2
Google Search Grounding Introduction
- Added Google Search grounding functionality, enabled by default
- Real-time web search integration for current information and facts
- Grounding metadata in responses with source citations
- Configurable grounding parameter in chat requests
- Enhanced accuracy for current events, news, and factual queries
v1.0.0
Initial Release
- Complete Node.js/TypeScript rewrite from Python
- Professional modular architecture with services pattern
- Comprehensive error handling and logging system
- Full MCP protocol compliance
- Support for 6 Gemini models
- NPM package distribution ready
- Enterprise-grade configuration management
- Production-ready build system
Built with ❤️ for the Model Context Protocol community
For more information about MCP, visit modelcontextprotocol.io