# ask-gemini-mcp

MCP server that enables AI assistants to interact with the Google Gemini CLI. Part of the Ask LLM family of packages.

| Package | Type |
| --- | --- |
| `ask-gemini-mcp` | MCP Server |
| `ask-codex-mcp` | MCP Server |
| `ask-ollama-mcp` | MCP Server |
| `ask-llm-mcp` | MCP Server |
| `@ask-llm/plugin` | Claude Code Plugin (installed via `/plugin install`) |

**MCP servers + Claude Code plugin for AI-to-AI collaboration**

MCP servers that bridge your AI client with multiple LLM providers for AI-to-AI collaboration. Works with Claude Code, Claude Desktop, Cursor, Warp, Copilot, and 40+ other MCP clients. Leverage Gemini's 1M+ token context, Codex's GPT-5.4, or local Ollama models — all via standard MCP.

## Why?

- Get a second opinion — Ask another AI to review your coding approach before committing
- Debate plans — Send architecture proposals for critique and alternative suggestions
- Review changes — Have multiple AIs analyze diffs to catch issues your primary AI might miss
- Massive context — Gemini reads entire codebases (1M+ tokens) that would overflow other models
- Local & private — Use Ollama for reviews where no data leaves your machine

## Quick Start

### Claude Code

```sh
# All-in-one — auto-detects installed providers
claude mcp add --scope user ask-llm -- npx -y ask-llm-mcp
```

Or install providers individually:

```sh
claude mcp add --scope user gemini -- npx -y ask-gemini-mcp
claude mcp add --scope user codex -- npx -y ask-codex-mcp
claude mcp add --scope user ollama -- npx -y ask-ollama-mcp
```

### Claude Desktop

Add to `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "ask-llm": {
      "command": "npx",
      "args": ["-y", "ask-llm-mcp"]
    }
  }
}
```

Or install providers individually:

```json
{
  "mcpServers": {
    "gemini": {
      "command": "npx",
      "args": ["-y", "ask-gemini-mcp"]
    },
    "codex": {
      "command": "npx",
      "args": ["-y", "ask-codex-mcp"]
    },
    "ollama": {
      "command": "npx",
      "args": ["-y", "ask-ollama-mcp"]
    }
  }
}
```
### Cursor, Codex CLI, OpenCode, and other clients

Cursor (`.cursor/mcp.json`):

```json
{
  "mcpServers": {
    "ask-llm": { "command": "npx", "args": ["-y", "ask-llm-mcp"] }
  }
}
```

Codex CLI (`~/.codex/config.toml`):

```toml
[mcp_servers.ask-llm]
command = "npx"
args = ["-y", "ask-llm-mcp"]
```

Any MCP Client (STDIO transport):

```json
{ "command": "npx", "args": ["-y", "ask-llm-mcp"] }
```

Replace `ask-llm-mcp` with `ask-gemini-mcp`, `ask-codex-mcp`, or `ask-ollama-mcp` for a single provider.
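Under the hood, the STDIO transport speaks newline-delimited JSON-RPC 2.0, as defined by the MCP specification. As a rough sketch (message shapes follow the spec; the `protocolVersion` string and client name below are placeholders, not something these servers require), a client's first requests look like:

```typescript
// Sketch of the first messages an MCP client writes to the server's stdin.
// Field values here are illustrative.
const initialize = {
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2025-06-18",
    capabilities: {},
    clientInfo: { name: "example-client", version: "0.1.0" },
  },
};

// Once initialized, calling the `ping` tool is a cheap way to verify the
// connection end to end.
const callPing = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: { name: "ping", arguments: {} },
};

// The STDIO transport frames each message as one JSON object per line.
const frame = (msg: object): string => JSON.stringify(msg) + "\n";
```

Any client that emits these frames to the server's stdin and reads responses from its stdout can use the tools; in practice your MCP client handles this handshake for you.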

## Claude Code Plugin

The Ask LLM plugin adds multi-provider code review, brainstorming, and automated hooks directly into Claude Code:

```
/plugin marketplace add Lykhoyda/ask-llm
/plugin install ask-llm@ask-llm-plugins
```

### What You Get

| Feature | Description |
| --- | --- |
| `/multi-review` | Parallel Gemini + Codex review with a 4-phase validation pipeline and consensus highlighting |
| `/gemini-review` | Gemini-only review with confidence filtering |
| `/codex-review` | Codex-only review with confidence filtering |
| `/ollama-review` | Local review; no data leaves your machine |
| `/brainstorm` | Multi-LLM brainstorm: Claude Opus researches the topic against real files in parallel with external providers (Gemini/Codex/Ollama), then synthesizes all findings, weighting verified findings higher |
| Pre-commit hook | Reviews staged changes before `git commit` and warns about critical issues |

The review agents use a 4-phase pipeline inspired by Anthropic's code-review plugin: context gathering, prompt construction with explicit false-positive exclusions, synthesis, and source-level validation of each finding.
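Conceptually, those four phases can be sketched as a single function. Every name below is hypothetical and for illustration only; the plugin's actual implementation differs:

```typescript
// Illustrative sketch of the 4-phase review pipeline described above.
type Finding = { file: string; message: string; confirmed: boolean };

function reviewPipeline(
  diff: string,
  gatherContext: (diff: string) => string,
  runReview: (prompt: string) => Finding[],
  validateAgainstSource: (f: Finding) => boolean,
): Finding[] {
  // Phase 1: context gathering — collect surrounding code so the reviewer
  // sees more than the raw diff.
  const context = gatherContext(diff);

  // Phase 2: prompt construction with explicit false-positive exclusions.
  const prompt =
    `Review this change.\nContext:\n${context}\nDiff:\n${diff}\n` +
    `Ignore: style nits and pre-existing issues outside the diff.`;

  // Phase 3: synthesis — the provider returns candidate findings.
  const candidates = runReview(prompt);

  // Phase 4: source-level validation — keep only findings that check out
  // against the actual source.
  return candidates
    .map((f) => ({ ...f, confirmed: validateAgainstSource(f) }))
    .filter((f) => f.confirmed);
}
```

The key design point is the last phase: each candidate finding is re-checked against the source before it is surfaced, which is what filters out hallucinated issues.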

See the plugin docs for details.

## Prerequisites

- Node.js v20.0.0 or higher (LTS)
- At least one provider:
  - Gemini CLI: `npm install -g @google/gemini-cli && gemini login`
  - Codex CLI: installed and authenticated
  - Ollama: running locally with a model pulled (`ollama pull qwen2.5-coder:7b`)
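If you want to script the Node.js check (say, in a setup script), a simple major-version comparison is enough. This helper is illustrative and not part of any of the packages:

```typescript
// Check that a Node.js version string like process.version ("v20.11.1")
// meets the v20+ requirement. Major-version check only, not full semver.
function meetsNodeRequirement(version: string, minMajor = 20): boolean {
  const major = Number(version.replace(/^v/, "").split(".")[0]);
  return Number.isFinite(major) && major >= minMajor;
}
```

Call it as `meetsNodeRequirement(process.version)` inside a running Node process.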

## MCP Tools

| Tool | Package | Purpose |
| --- | --- | --- |
| `ask-gemini` | `ask-gemini-mcp` | Send prompts to Gemini CLI with `@` file syntax; 1M+ token context |
| `ask-gemini-edit` | `ask-gemini-mcp` | Get structured OLD/NEW code edit blocks from Gemini |
| `fetch-chunk` | `ask-gemini-mcp` | Retrieve chunks from cached large responses |
| `ask-codex` | `ask-codex-mcp` | Send prompts to Codex CLI; GPT-5.4 with mini fallback |
| `ask-ollama` | `ask-ollama-mcp` | Send prompts to local Ollama; fully private, zero cost |
| `ping` | all | Connection test to verify MCP setup |

## Usage Examples

```
ask gemini to review the changes in @src/auth.ts for security issues
ask codex to suggest a better algorithm for @src/sort.ts
ask ollama to explain @src/config.ts (runs locally, no data sent anywhere)
use gemini to summarize @. the current directory
```

## Models

| Provider | Default | Fallback |
| --- | --- | --- |
| Gemini | `gemini-3.1-pro-preview` | `gemini-3-flash-preview` (on quota) |
| Codex | `gpt-5.4` | `gpt-5.4-mini` (on quota) |
| Ollama | `qwen2.5-coder:7b` | `qwen2.5-coder:1.5b` (if not found) |

All providers automatically fall back to a lighter model on errors.
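That behavior amounts to a try-primary-then-retry-lighter pattern. A minimal sketch, with a hypothetical `ask` callback rather than the servers' actual implementation:

```typescript
// Illustrative fallback: try the default model, and retry once on the
// lighter model if the first attempt fails (e.g. quota exhausted or
// model not found).
async function askWithFallback(
  ask: (model: string, prompt: string) => Promise<string>,
  primary: string,
  fallback: string,
  prompt: string,
): Promise<string> {
  try {
    return await ask(primary, prompt);
  } catch {
    return await ask(fallback, prompt);
  }
}
```

A single retry keeps behavior predictable: you either get the strongest available model or a known lighter substitute, never an unbounded retry loop.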


## Contributing

Contributions are welcome! See open issues for things to work on.

## License

MIT License. See LICENSE for details.

Disclaimer: This is an unofficial, third-party tool and is not affiliated with, endorsed by, or sponsored by Google or OpenAI.

## Related Servers

NotebookLM Web Importer: a Chrome extension that imports web pages and YouTube videos to NotebookLM with one click. Trusted by 200,000+ users.