Multi Database MCP Server
An MCP server that provides AI assistants with structured access to multiple databases simultaneously.
Overview
The DB MCP Server provides a standardized way for AI models to interact with multiple databases simultaneously. Built on the FreePeak/cortex framework, it enables AI assistants to execute SQL queries, manage transactions, explore schemas, and analyze performance across different database systems through a unified interface.
Core Concepts
Multi-Database Support
Unlike traditional database connectors, DB MCP Server can connect to and interact with multiple databases concurrently:
```json
{
  "connections": [
    {
      "id": "mysql1",
      "type": "mysql",
      "host": "localhost",
      "port": 3306,
      "name": "db1",
      "user": "user1",
      "password": "password1"
    },
    {
      "id": "postgres1",
      "type": "postgres",
      "host": "localhost",
      "port": 5432,
      "name": "db2",
      "user": "user2",
      "password": "password2"
    }
  ]
}
```
Dynamic Tool Generation
For each connected database, the server automatically generates specialized tools:
```
// For a database with ID "mysql1", these tools are generated:
query_mysql1       // Execute SQL queries
execute_mysql1     // Run data modification statements
transaction_mysql1 // Manage transactions
schema_mysql1      // Explore database schema
performance_mysql1 // Analyze query performance
```
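The naming convention above can be sketched in Go. Note that `toolNames` is an illustrative helper written for this example, not part of the server's actual API:

```go
package main

import "fmt"

// toolNames builds the per-database tool names following the
// <action>_<db_id> convention used by the server.
func toolNames(dbID string) []string {
	actions := []string{"query", "execute", "transaction", "schema", "performance"}
	names := make([]string, 0, len(actions))
	for _, a := range actions {
		names = append(names, fmt.Sprintf("%s_%s", a, dbID))
	}
	return names
}

func main() {
	for _, n := range toolNames("mysql1") {
		fmt.Println(n)
	}
}
```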
Clean Architecture
The server follows Clean Architecture principles with these layers:
- Domain Layer: Core business entities and interfaces
- Repository Layer: Data access implementations
- Use Case Layer: Application business logic
- Delivery Layer: External interfaces (MCP tools)
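One way to picture how these layers fit together in Go (the type names here are illustrative, not the server's actual types): the use case depends only on a repository interface, and the delivery layer would wrap the use case as an MCP tool:

```go
package main

import "fmt"

// Domain layer: core entity and repository contract.
type QueryResult struct {
	Rows []map[string]any
}

type DatabaseRepository interface {
	Query(sql string) (QueryResult, error)
}

// Use case layer: business logic written against the interface,
// with no knowledge of any concrete database driver.
type QueryUseCase struct {
	repo DatabaseRepository
}

func (u QueryUseCase) Run(sql string) (QueryResult, error) {
	return u.repo.Query(sql)
}

// Repository layer: a concrete implementation (stubbed here).
type stubRepo struct{}

func (stubRepo) Query(sql string) (QueryResult, error) {
	return QueryResult{Rows: []map[string]any{{"id": 1}}}, nil
}

func main() {
	// Delivery layer would expose this use case as an MCP tool.
	uc := QueryUseCase{repo: stubRepo{}}
	res, err := uc.Run("SELECT id FROM users")
	fmt.Println(len(res.Rows), err)
}
```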
Features
- Simultaneous Multi-Database Support: Connect to multiple MySQL and PostgreSQL databases concurrently
- Lazy Loading Mode: Defer connection establishment until first use, ideal for setups with 10+ databases (enable with the `--lazy-loading` flag)
- Database-Specific Tool Generation: Auto-creates specialized tools for each connected database
- Clean Architecture: Modular design with clear separation of concerns
- OpenAI Agents SDK Compatibility: Full compatibility for seamless AI assistant integration
- Dynamic Database Tools: Execute queries, run statements, manage transactions, explore schemas, analyze performance
- Unified Interface: Consistent interaction patterns across different database types
- Connection Management: Simple configuration for multiple database connections
- Health Check: Automatic validation of database connectivity on startup
Supported Databases
| Database | Status | Features |
|---|---|---|
| MySQL | ✅ Full Support | Queries, Transactions, Schema Analysis, Performance Insights |
| PostgreSQL | ✅ Full Support (v9.6-17) | Queries, Transactions, Schema Analysis, Performance Insights |
| SQLite | ✅ Full Support | File-based & In-memory databases, SQLCipher encryption support |
| TimescaleDB | ✅ Full Support | Hypertables, Time-Series Queries, Continuous Aggregates, Compression, Retention Policies |
Deployment Options
The DB MCP Server can be deployed in multiple ways to suit different environments and integration needs:
Docker Deployment
```bash
# Pull the latest image
docker pull freepeak/db-mcp-server:latest

# Run with mounted config file
docker run -p 9092:9092 \
  -v $(pwd)/config.json:/app/my-config.json \
  -e TRANSPORT_MODE=sse \
  -e CONFIG_PATH=/app/my-config.json \
  freepeak/db-mcp-server
```
Note: Mount your config to `/app/my-config.json`, as the container already has a default file at `/app/config.json`.
STDIO Mode (IDE Integration)
```bash
# Run the server in STDIO mode
./bin/server -t stdio -c config.json
```
For Cursor IDE integration, add to `.cursor/mcp.json`:
```json
{
  "mcpServers": {
    "stdio-db-mcp-server": {
      "command": "/path/to/db-mcp-server/server",
      "args": ["-t", "stdio", "-c", "/path/to/config.json"]
    }
  }
}
```
SSE Mode (Server-Sent Events)
```bash
# Default configuration (localhost:9092)
./bin/server -t sse -c config.json

# Custom host and port
./bin/server -t sse -host 0.0.0.0 -port 8080 -c config.json
```
Client connection endpoint: `http://localhost:9092/sse`
Source Code Installation
```bash
# Clone the repository
git clone https://github.com/FreePeak/db-mcp-server.git
cd db-mcp-server

# Build the server
make build

# Run the server
./bin/server -t sse -c config.json
```
Configuration
Database Configuration File
Create a `config.json` file with your database connections:
```json
{
  "connections": [
    {
      "id": "mysql1",
      "type": "mysql",
      "host": "mysql1",
      "port": 3306,
      "name": "db1",
      "user": "user1",
      "password": "password1",
      "query_timeout": 60,
      "max_open_conns": 20,
      "max_idle_conns": 5,
      "conn_max_lifetime_seconds": 300,
      "conn_max_idle_time_seconds": 60
    },
    {
      "id": "postgres1",
      "type": "postgres",
      "host": "postgres1",
      "port": 5432,
      "name": "db1",
      "user": "user1",
      "password": "password1"
    },
    {
      "id": "sqlite_app",
      "type": "sqlite",
      "database_path": "./data/app.db",
      "journal_mode": "WAL",
      "cache_size": 2000,
      "read_only": false,
      "use_modernc_driver": true,
      "query_timeout": 30,
      "max_open_conns": 1,
      "max_idle_conns": 1
    },
    {
      "id": "sqlite_encrypted",
      "type": "sqlite",
      "database_path": "./data/secure.db",
      "encryption_key": "your-secret-key-here",
      "journal_mode": "WAL",
      "use_modernc_driver": false
    },
    {
      "id": "sqlite_memory",
      "type": "sqlite",
      "database_path": ":memory:",
      "cache_size": 1000,
      "use_modernc_driver": true
    }
  ]
}
```
Command-Line Options
```bash
# Basic syntax
./bin/server -t <transport> -c <config-file>

# SSE transport options
./bin/server -t sse -host <hostname> -port <port> -c <config-file>

# Lazy loading mode (recommended for 10+ databases)
./bin/server -t stdio -c <config-file> --lazy-loading

# Customize log directory (useful for multi-project setups)
./bin/server -t stdio -c <config-file> -log-dir /tmp/db-mcp-logs

# Inline database configuration
./bin/server -t stdio -db-config '{"connections":[...]}'

# Environment variable configuration
export DB_CONFIG='{"connections":[...]}'
./bin/server -t stdio
```
Available Flags:
- `-t, -transport`: Transport mode (`stdio` or `sse`)
- `-c, -config`: Path to database configuration file
- `-p, -port`: Server port for SSE mode (default: 9092)
- `-h, -host`: Server host for SSE mode (default: `localhost`)
- `-log-level`: Log level (`debug`, `info`, `warn`, `error`)
- `-log-dir`: Directory for log files (default: `./logs` in the current directory)
- `-db-config`: Inline JSON database configuration
SQLite Configuration Options
When using SQLite databases, you can leverage these additional configuration options:
SQLite Connection Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `database_path` | string | Required | Path to the SQLite database file, or `:memory:` for an in-memory database |
| `encryption_key` | string | - | Key for SQLCipher-encrypted databases |
| `read_only` | boolean | `false` | Open the database in read-only mode |
| `cache_size` | integer | `2000` | SQLite cache size in pages |
| `journal_mode` | string | `"WAL"` | Journal mode: `DELETE`, `TRUNCATE`, `PERSIST`, `WAL`, `OFF` |
| `use_modernc_driver` | boolean | `true` | Use `modernc.org/sqlite` (CGO-free) or `mattn/go-sqlite3` |
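As an illustration of how these options might map onto a connection string, the helper below is purely hypothetical: actual DSN parameter names differ between `modernc.org/sqlite` and `mattn/go-sqlite3`, so treat this as a sketch of the idea rather than a working DSN builder for either driver:

```go
package main

import (
	"fmt"
	"net/url"
)

// sqliteDSN is a hypothetical helper: it encodes the documented options
// as query parameters on a file: URI. Real drivers use different names.
func sqliteDSN(path, journalMode string, cacheSize int, readOnly bool) string {
	q := url.Values{}
	q.Set("journal_mode", journalMode)
	q.Set("cache_size", fmt.Sprintf("%d", cacheSize))
	if readOnly {
		q.Set("mode", "ro")
	}
	return "file:" + path + "?" + q.Encode()
}

func main() {
	fmt.Println(sqliteDSN("./data/app.db", "WAL", 2000, false))
}
```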
SQLite Examples
Basic File Database
```json
{
  "id": "my_sqlite_db",
  "type": "sqlite",
  "database_path": "./data/myapp.db",
  "journal_mode": "WAL",
  "cache_size": 2000
}
```
Encrypted Database (SQLCipher)
```json
{
  "id": "encrypted_db",
  "type": "sqlite",
  "database_path": "./data/secure.db",
  "encryption_key": "your-secret-encryption-key",
  "use_modernc_driver": false
}
```
In-Memory Database
```json
{
  "id": "memory_db",
  "type": "sqlite",
  "database_path": ":memory:",
  "cache_size": 1000
}
```
Read-Only Database
```json
{
  "id": "reference_data",
  "type": "sqlite",
  "database_path": "./data/reference.db",
  "read_only": true,
  "journal_mode": "DELETE"
}
```
Available Tools
For each connected database, DB MCP Server automatically generates these specialized tools:
Query Tools
| Tool Name | Description |
|---|---|
| `query_<db_id>` | Execute SELECT queries and get results as a tabular dataset |
| `execute_<db_id>` | Run data manipulation statements (INSERT, UPDATE, DELETE) |
| `transaction_<db_id>` | Begin, commit, and roll back transactions |
Schema Tools
| Tool Name | Description |
|---|---|
| `schema_<db_id>` | Get information about tables, columns, indexes, and foreign keys |
| `generate_schema_<db_id>` | Generate SQL or code from the database schema |
Performance Tools
| Tool Name | Description |
|---|---|
| `performance_<db_id>` | Analyze query performance and get optimization suggestions |
TimescaleDB Tools
For PostgreSQL databases with the TimescaleDB extension, these additional specialized tools are available:
| Tool Name | Description |
|---|---|
| `timescaledb_<db_id>` | Perform general TimescaleDB operations |
| `create_hypertable_<db_id>` | Convert a standard table to a TimescaleDB hypertable |
| `list_hypertables_<db_id>` | List all hypertables in the database |
| `time_series_query_<db_id>` | Execute optimized time-series queries with bucketing |
| `time_series_analyze_<db_id>` | Analyze time-series data patterns |
| `continuous_aggregate_<db_id>` | Create materialized views that update automatically |
| `refresh_continuous_aggregate_<db_id>` | Manually refresh continuous aggregates |
For detailed documentation on TimescaleDB tools, see TIMESCALEDB_TOOLS.md.
Examples
Querying Multiple Databases
```sql
-- Query the MySQL database
query_mysql1("SELECT * FROM users LIMIT 10")

-- Query the PostgreSQL database in the same context
query_postgres1("SELECT * FROM products WHERE price > 100")

-- Query the SQLite database
query_sqlite_app("SELECT * FROM local_data WHERE created_at > datetime('now', '-1 day')")
```
Managing Transactions
```sql
-- Start a transaction
transaction_mysql1("BEGIN")

-- Execute statements within the transaction
execute_mysql1("INSERT INTO orders (customer_id, product_id) VALUES (1, 2)")
execute_mysql1("UPDATE inventory SET stock = stock - 1 WHERE product_id = 2")

-- Commit or rollback
transaction_mysql1("COMMIT")
-- OR
transaction_mysql1("ROLLBACK")
```
Exploring Database Schema
```sql
-- Get all tables in the database
schema_mysql1("tables")

-- Get columns for a specific table
schema_mysql1("columns", "users")

-- Get constraints
schema_mysql1("constraints", "orders")
```
Working with SQLite-Specific Features
```sql
-- Create a table in SQLite
execute_sqlite_app("CREATE TABLE IF NOT EXISTS local_cache (key TEXT PRIMARY KEY, value TEXT, timestamp DATETIME)")

-- Use SQLite-specific date functions
query_sqlite_app("SELECT * FROM events WHERE date(created_at) = date('now')")

-- Query the SQLite master table for schema information
query_sqlite_app("SELECT name, sql FROM sqlite_master WHERE type='table' AND name NOT LIKE 'sqlite_%'")

-- Performance optimization with WAL mode
execute_sqlite_app("PRAGMA journal_mode = WAL")
execute_sqlite_app("PRAGMA synchronous = NORMAL")
```
Troubleshooting
Common Issues
- Connection Failures: Verify network connectivity and database credentials
- Permission Errors: Ensure the database user has appropriate permissions
- Timeout Issues: Check the `query_timeout` setting in your configuration
Logs
Enable verbose logging for troubleshooting:
```bash
./bin/server -t sse -c config.json -v
```
Contributing
We welcome contributions to the DB MCP Server project! To contribute:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'feat: add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
Please see our CONTRIBUTING.md file for detailed guidelines.
License
This project is licensed under the MIT License - see the LICENSE file for details.