AI Counsel

True deliberative consensus MCP server where AI models debate and refine positions across multiple rounds.

License: MIT | Python 3.11+ | Platform: MCP | Code style: black

🎬 See It In Action

Cloud Models Debate (Claude Sonnet, GPT-5 Codex, Gemini):

mcp__ai-counsel__deliberate({
  question: "Should we use REST or GraphQL for our new API?",
  participants: [
    {cli: "claude", model: "claude-sonnet-4-5-20250929"},
    {cli: "codex", model: "gpt-5-codex"},
    {cli: "gemini", model: "gemini-2.5-pro"}
  ],
  mode: "conference",
  rounds: 3
})

Result: Converged on hybrid architecture (0.82-0.95 confidence) • View full transcript

Local Models Debate (100% private, zero API costs):

mcp__ai-counsel__deliberate({
  question: "Should we prioritize code quality or delivery speed?",
  participants: [
    {cli: "ollama", model: "llama3.1:8b"},
    {cli: "ollama", model: "mistral:7b"},
    {cli: "ollama", model: "deepseek-r1:8b"}
  ],
  mode: "conference",
  rounds: 2
})

Result: 2 models switched positions after Round 1 debate • View full transcript


What Makes This Different

AI Counsel enables TRUE deliberative consensus where models see each other's responses and refine positions across multiple rounds:

  • Models engage in actual debate (see and respond to each other)
  • Multi-round convergence with voting and confidence levels
  • Full audit trail with AI-generated summaries
  • Automatic early stopping when consensus is reached (saves API costs)

Features

  • 🎯 Two Modes: quick (single-round) or conference (multi-round debate)
  • 🤖 Mixed Adapters: CLI tools (claude, codex, droid, gemini) + HTTP services (ollama, lmstudio, openrouter)
  • ⚡ Auto-Convergence: Stops when opinions stabilize (saves API costs)
  • 🗳️ Structured Voting: Models cast votes with confidence levels and rationale
  • 🧮 Semantic Grouping: Similar vote options automatically merged (0.70+ similarity)
  • 🎛️ Model-Controlled Stopping: Models decide when to stop deliberating
  • 🔬 Evidence-Based Deliberation: Models can read files, search code, list files, and run commands to ground decisions in reality
  • 💰 Local Model Support: Zero API costs with Ollama, LM Studio, llamacpp
  • 🔐 Data Privacy: Keep all data on-premises with self-hosted models
  • 🧠 Context Injection: Automatically finds similar past debates and injects context for faster convergence
  • 🔍 Semantic Search: Query past decisions with query_decisions tool (finds contradictions, traces evolution, analyzes patterns)
  • 🛡️ Fault Tolerant: Individual adapter failures don't halt deliberation
  • 📝 Full Transcripts: Markdown exports with AI-generated summaries

Quick Start

Get up and running in minutes:

  1. Install – follow the commands in Installation to clone the repo, create a virtualenv, and install requirements.
  2. Configure – set up your MCP client using the .mcp.json example in Configure in Claude Code.
  3. Run – start the server with python server.py and trigger the deliberate tool using the examples in Usage.

Try a Deliberation:

// Mix local + cloud models, zero API costs for local models
mcp__ai-counsel__deliberate({
  question: "Should we add unit tests to new features?",
  participants: [
    {cli: "ollama", model: "llama2"},           // Local
    {cli: "lmstudio", model: "mistral"},        // Local
    {cli: "claude", model: "sonnet"}            // Cloud
  ],
  mode: "quick"
})

⚠️ Model Size Matters for Deliberations

Recommended: Use 7B-8B+ parameter models (Llama-3-8B, Mistral-7B, Qwen-2.5-7B) for reliable structured output and vote formatting.

Not Recommended: Models under 3B parameters (e.g., Llama-3.2-1B) may struggle with complex instructions and produce invalid votes.

Available Models: claude (sonnet, opus, haiku), codex (gpt-5-codex), droid, gemini, HTTP adapters (ollama, lmstudio, openrouter). See CLI Model Reference for complete details.

For model choices and picker workflow, see Model Registry & Picker.

Installation

Prerequisites

  1. Python 3.11+: python3 --version
  2. At least one AI CLI tool (optional - HTTP adapters work without a CLI).

Setup

git clone https://github.com/blueman82/ai-counsel.git
cd ai-counsel
python3 -m venv .venv
source .venv/bin/activate  # macOS/Linux; Windows: .venv\Scripts\activate
pip install -r requirements.txt
python3 -m pytest tests/unit -v  # Verify installation

✅ Ready to use! Server includes core dependencies plus optional convergence backends (scikit-learn, sentence-transformers) for best accuracy.

Configuration

Edit config.yaml to configure adapters and settings:

adapters:
  claude:
    type: cli
    command: "claude"
    args: ["-p", "--model", "{model}", "--settings", "{\"disableAllHooks\": true}", "{prompt}"]
    timeout: 300

  ollama:
    type: http
    base_url: "http://localhost:11434"
    timeout: 120
    max_retries: 3

defaults:
  mode: "quick"
  rounds: 2
  max_rounds: 5

Note: Use type: cli for CLI tools and type: http for HTTP adapters (Ollama, LM Studio, OpenRouter).

Model Registry Configuration

Control which models are available for selection in the model registry. Each model can be enabled or disabled without removing its definition:

model_registry:
  claude:
    - id: "claude-sonnet-4-5-20250929"
      label: "Claude Sonnet 4.5"
      tier: "balanced"
      default: true
      enabled: true  # Model is active and available
    - id: "claude-opus-4-20250514"
      label: "Claude Opus 4"
      tier: "premium"
      enabled: false  # Temporarily disabled (cost control, testing, etc.)

Enabled Field Behavior:

  • enabled: true (default) - Model appears in list_models and can be selected for deliberations
  • enabled: false - Model is hidden from selection but definition retained for easy re-enabling
  • Disabled models cannot be used even if explicitly specified in deliberate calls
  • Default model selection skips disabled models automatically

Use Cases:

  • Cost Control: Disable expensive models temporarily without losing configuration
  • Testing: Enable/disable specific models during integration tests
  • Staged Rollout: Configure new models as disabled, enable when ready
  • Performance Tuning: Disable slow models during rapid iteration
  • Compliance: Temporarily restrict models pending approval

Core Features Deep Dive

Convergence Detection & Auto-Stop

Models automatically converge and stop deliberating when opinions stabilize, saving time and API costs. Status: Converged (≥85% similarity), Refining (40-85%), Diverging (<40%), or Impasse (stable disagreement). Voting takes precedence: when models cast votes, convergence reflects the voting outcome.

→ Complete Guide - Thresholds, backends, configuration

Structured Voting

Models cast votes with confidence levels (0.0-1.0), rationale, and continue_debate signals. Votes determine consensus: Unanimous (3-0), Majority (2-1), or Tie. Similar options are automatically merged at a 0.70+ similarity threshold.
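
For illustration, a single vote might carry fields like these (a hedged sketch built from the description above; the exact schema and any field names beyond confidence, rationale, and continue_debate are assumptions, see the voting guide):

{
  option: "Adopt GraphQL",           // proposed choice (field name is an assumption)
  confidence: 0.85,                  // 0.0-1.0 scale
  rationale: "Schema flexibility outweighs caching simplicity here",
  continue_debate: false             // model signals it is done deliberating
}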

→ Complete Guide - Vote structure, examples, integration

HTTP Adapters & Local Models

Run Ollama or LM Studio locally for zero API costs and complete data privacy, or route through OpenRouter's hosted HTTP API. Mix local and cloud models (Claude, GPT-4) in a single deliberation.
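
As a sketch, an LM Studio entry in config.yaml can mirror the Ollama adapter shown under Configuration (the port below is LM Studio's usual local default and is an assumption for your setup; see the setup guides for exact values):

lmstudio:
  type: http
  base_url: "http://localhost:1234"   # LM Studio's local server; adjust if you changed the port
  timeout: 120
  max_retries: 3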

→ Setup Guides - Ollama, LM Studio, OpenRouter, cost analysis

Extending AI Counsel

Add new CLI tools or HTTP adapters to fit your infrastructure. Simple 3-5 step process with examples and testing patterns.
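
For instance, registering a hypothetical new CLI tool starts with a config.yaml entry that follows the same pattern as the existing adapters (the adapter name, command, and flags below are placeholders, not a real tool):

adapters:
  mycli:                                       # hypothetical adapter name
    type: cli
    command: "mycli"
    args: ["--model", "{model}", "{prompt}"]   # placeholder flags; match your tool's CLI
    timeout: 300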

→ Developer Guide - Step-by-step tutorials, real-world examples

Evidence-Based Deliberation

Ground design decisions in reality by querying actual code, files, and data:

// MCP client example (e.g., Claude Code)
mcp__ai-counsel__deliberate({
  question: "Should we migrate from SQLite to PostgreSQL?",
  participants: [
    {cli: "claude", model: "sonnet"},
    {cli: "codex", model: "gpt-4"}
  ],
  rounds: 3,
  working_directory: process.cwd()  // Required - enables tools to access your files
})

During deliberation, models can:

  • 📄 Read files: TOOL_REQUEST: {"name": "read_file", "arguments": {"path": "config.yaml"}}
  • 🔍 Search code: TOOL_REQUEST: {"name": "search_code", "arguments": {"pattern": "database.*connect"}}
  • 📋 List files: TOOL_REQUEST: {"name": "list_files", "arguments": {"pattern": "*.sql"}}
  • ⚙️ Run commands: TOOL_REQUEST: {"name": "run_command", "arguments": {"command": "git", "args": ["log", "--oneline"]}}

Example workflow:

  1. Model A proposes PostgreSQL based on assumptions
  2. Model B requests: read_file to check current config
  3. Tool returns: database: sqlite, max_connections: 10
  4. Model B searches: search_code for database queries
  5. Tool returns: 50+ queries with complex JOINs
  6. Models converge: "PostgreSQL needed for query complexity and scale"
  7. Decision backed by evidence, not opinion

Benefits:

  • Decisions rooted in current state, not assumptions
  • Applies to code reviews, architecture choices, testing strategy
  • Full audit trail of evidence in transcripts

Supported Tools:

  • read_file - Read file contents (max 1MB)
  • search_code - Search regex patterns (ripgrep or Python fallback)
  • list_files - List files matching glob patterns
  • run_command - Execute safe read-only commands (ls, git, grep, etc.)

Configuration

Control tool behavior in config.yaml (a combined sketch follows the lists below):

Working Directory (Required):

  • Set working_directory parameter when calling deliberate tool
  • Tools resolve relative paths from this directory
  • Example: working_directory: process.cwd() in JavaScript MCP clients

Tool Security (deliberation.tool_security):

  • exclude_patterns: Block access to sensitive directories (default: transcripts/, .git/, node_modules/)
  • max_file_size_bytes: File size limit for read_file (default: 1MB)
  • command_whitelist: Safe commands for run_command (ls, grep, find, cat, head, tail)

File Tree (deliberation.file_tree):

  • enabled: Inject repository structure into Round 1 prompts (default: true)
  • max_depth: Directory depth limit (default: 3)
  • max_files: Maximum files to include (default: 100)
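
Taken together, a config.yaml sketch for these settings might look like this (values are the stated defaults; the exact nesting and key spelling should be checked against the shipped config.yaml):

deliberation:
  tool_security:
    exclude_patterns: ["transcripts/", ".git/", "node_modules/"]    # blocked paths
    max_file_size_bytes: 1048576                                    # 1MB read_file limit
    command_whitelist: ["ls", "grep", "find", "cat", "head", "tail"]
  file_tree:
    enabled: true        # inject repository structure into Round 1 prompts
    max_depth: 3
    max_files: 100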

Adapter-Specific Requirements:

Adapter          | Working Directory Behavior                              | Configuration
Claude           | Automatic isolation via subprocess {working_directory}  | No special config needed
Codex            | No true isolation - can access any file                 | Security consideration: models can read outside {working_directory}
Droid            | Automatic isolation via subprocess {working_directory}  | No special config needed
Gemini           | Enforces workspace boundaries                           | Required: --include-directories {working_directory} flag
Ollama/LMStudio  | N/A - HTTP adapters                                     | No file system access restrictions


Troubleshooting

"File not found" errors:

  • Ensure working_directory is set correctly in your MCP client call
  • Use discovery pattern: list_files → read_file
  • Check file paths are relative to working directory

"Access denied: Path matches exclusion pattern":

  • Tools block transcripts/, .git/, node_modules/ by default
  • Customize via deliberation.tool_security.exclude_patterns in config.yaml

Gemini "File path must be within workspace" errors:

  • Verify Gemini's --include-directories flag uses {working_directory} placeholder
  • See adapter-specific setup above

Tool timeout errors:

  • Increase deliberation.tool_security.tool_timeout for slow operations (see the sketch below)
  • Default: 10 seconds for file operations, 30 seconds for commands
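
For example, a raised timeout could look like this in config.yaml (the key path comes from the item above; whether file and command operations share a single key is an assumption):

deliberation:
  tool_security:
    tool_timeout: 60   # seconds; raise when file or command tools time out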


Decision Graph Memory

AI Counsel learns from past deliberations to accelerate future decisions. Two core capabilities:

1. Automatic Context Injection

When starting a new deliberation, the system:

  • Searches past debates for similar questions (semantic similarity)
  • Finds the top-k most relevant decisions (configurable, default: 3)
  • Injects context into Round 1 prompts automatically
  • Result: Models start with institutional knowledge, converge faster

2. Semantic Search with query_decisions

Query past deliberations programmatically:

  • Search similar: Find decisions related to a question
  • Find contradictions: Detect conflicting past decisions
  • Trace evolution: See how opinions changed over time
  • Analyze patterns: Identify recurring themes

Configuration (optional - defaults work out-of-box):

decision_graph:
  enabled: true                       # Auto-injection on by default
  db_path: "decision_graph.db"        # Resolves to project root (works for any user/folder)
  similarity_threshold: 0.6           # Adjust to control context relevance
  max_context_decisions: 3            # How many past decisions to inject

Works for any user from any directory - database path is resolved relative to project root.

→ Quickstart | Configuration | Context Injection

Usage

Start the Server

python server.py

Configure in Claude Code

Option A: Project Config (Recommended) - Create .mcp.json:

{
  "mcpServers": {
    "ai-counsel": {
      "type": "stdio",
      "command": ".venv/bin/python",
      "args": ["server.py"],
      "env": {}
    }
  }
}

Option B: User Config - Add to ~/.claude.json with absolute paths.
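
For Option B, the entry has the same shape with absolute paths substituted (the paths below are placeholders for your checkout location):

{
  "mcpServers": {
    "ai-counsel": {
      "type": "stdio",
      "command": "/path/to/ai-counsel/.venv/bin/python",
      "args": ["/path/to/ai-counsel/server.py"],
      "env": {}
    }
  }
}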

After configuration, restart Claude Code.

Model Selection & Session Defaults

  • Discover the allowlisted models for each adapter by running the MCP tool list_models.
  • Set per-session defaults with set_session_models; leave model blank in deliberate to use those defaults (see the sketch after this list).
  • Full instructions and request examples live in Model Registry & Picker.
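
A hedged sketch of that flow (the exact request formats for list_models and set_session_models, and whether model is omitted or passed blank, are documented in Model Registry & Picker):

// 1. Call list_models to see the allowlisted models per adapter
// 2. Call set_session_models to store your per-session defaults
// 3. Then deliberate without model - participants fall back to the session defaults
mcp__ai-counsel__deliberate({
  question: "Adopt trunk-based development?",
  participants: [{cli: "claude"}, {cli: "ollama"}],
  mode: "quick"
})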

Examples

Quick Mode:

mcp__ai-counsel__deliberate({
  question: "Should we migrate to TypeScript?",
  participants: [{cli: "claude", model: "sonnet"}, {cli: "codex", model: "gpt-5-codex"}],
  mode: "quick"
})

Conference Mode (multi-round):

mcp__ai-counsel__deliberate({
  question: "JWT vs session-based auth?",
  participants: [
    {cli: "claude", model: "sonnet"},
    {cli: "codex", model: "gpt-5-codex"}
  ],
  rounds: 3,
  mode: "conference"
})

Search Past Decisions:

mcp__ai-counsel__query_decisions({
  query: "database choice",
  operation: "search_similar",
  limit: 5
})
// Returns: Similar past deliberations with consensus and similarity scores

// Find contradictions
mcp__ai-counsel__query_decisions({
  operation: "find_contradictions"
})
// Returns: Decisions where consensus conflicts

// Trace evolution
mcp__ai-counsel__query_decisions({
  query: "microservices architecture",
  operation: "trace_evolution"
})
// Returns: How opinions evolved over time on this topic

Transcripts

All deliberations are saved to transcripts/ with AI-generated summaries and full debate history.

Architecture

ai-counsel/
├── server.py                # MCP server entry point
├── config.yaml              # Configuration
├── adapters/                # CLI/HTTP adapters
│   ├── base.py             # Abstract base
│   ├── base_http.py        # HTTP base
│   └── [adapter implementations]
├── deliberation/            # Core engine
│   ├── engine.py           # Orchestration
│   ├── convergence.py      # Similarity detection
│   └── transcript.py       # Markdown generation
├── models/                  # Data models (Pydantic)
├── tests/                   # Unit/integration/e2e tests
└── decision_graph/         # Optional memory system

Documentation Hub

Getting Started

Core Concepts

Setup & Configuration

Development

Reference

Development

Running Tests

pytest tests/unit -v                    # Unit tests (fast)
pytest tests/integration -v -m integration  # Integration tests
pytest --cov=. --cov-report=html       # Coverage report

See CLAUDE.md for development workflow and architecture notes.

Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/your-feature)
  3. Write tests first (TDD workflow)
  4. Implement feature
  5. Ensure all tests pass
  6. Submit PR with clear description

License

MIT License - see LICENSE file

Credits


Inspired by the need for true deliberative AI consensus beyond parallel opinion gathering.


Status


Production Ready - Multi-model deliberative consensus with cross-user decision graph memory, structured voting, and adaptive early stopping for critical technical decisions!

Related Servers