Multi Database MCP Server
An MCP server that provides AI assistants with structured access to multiple databases simultaneously.
Overview
The DB MCP Server provides a standardized way for AI models to interact with multiple databases simultaneously. Built on the FreePeak/cortex framework, it enables AI assistants to execute SQL queries, manage transactions, explore schemas, and analyze performance across different database systems through a unified interface.
Core Concepts
Multi-Database Support
Unlike traditional database connectors, DB MCP Server can connect to and interact with multiple databases concurrently:
```json
{
  "connections": [
    {
      "id": "mysql1",
      "type": "mysql",
      "host": "localhost",
      "port": 3306,
      "name": "db1",
      "user": "user1",
      "password": "password1"
    },
    {
      "id": "postgres1",
      "type": "postgres",
      "host": "localhost",
      "port": 5432,
      "name": "db2",
      "user": "user2",
      "password": "password2"
    },
    {
      "id": "oracle1",
      "type": "oracle",
      "host": "localhost",
      "port": 1521,
      "service_name": "XEPDB1",
      "user": "user3",
      "password": "password3"
    }
  ]
}
```
Dynamic Tool Generation
For each connected database, the server automatically generates specialized tools:
```
// For a database with ID "mysql1", these tools are generated:
query_mysql1        // Execute SQL queries
execute_mysql1      // Run data modification statements
transaction_mysql1  // Manage transactions
schema_mysql1       // Explore database schema
performance_mysql1  // Analyze query performance
```
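The naming convention can be sketched in Go (illustrative only — the server derives these names internally; `toolNames` is a hypothetical helper, not part of the server's API):

```go
package main

import "fmt"

// toolNames illustrates the naming convention: each tool category
// is suffixed with the connection ID from config.json.
func toolNames(dbID string) []string {
	categories := []string{"query", "execute", "transaction", "schema", "performance"}
	names := make([]string, 0, len(categories))
	for _, c := range categories {
		names = append(names, c+"_"+dbID)
	}
	return names
}

func main() {
	for _, n := range toolNames("mysql1") {
		fmt.Println(n)
	}
}
```

Because the suffix is the connection ID you choose, descriptive IDs in `config.json` produce self-explanatory tool names.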
Clean Architecture
The server follows Clean Architecture principles with these layers:
- Domain Layer: Core business entities and interfaces
- Repository Layer: Data access implementations
- Use Case Layer: Application business logic
- Delivery Layer: External interfaces (MCP tools)
Features
- Simultaneous Multi-Database Support: Connect to multiple MySQL, PostgreSQL, SQLite, and Oracle databases concurrently
- Lazy Loading Mode: Defer connection establishment until first use, ideal for setups with 10+ databases (enable with the `--lazy-loading` flag)
- Database-Specific Tool Generation: Auto-creates specialized tools for each connected database
- Clean Architecture: Modular design with clear separation of concerns
- OpenAI Agents SDK Compatibility: Full compatibility for seamless AI assistant integration
- Dynamic Database Tools: Execute queries, run statements, manage transactions, explore schemas, analyze performance
- Unified Interface: Consistent interaction patterns across different database types
- Connection Management: Simple configuration for multiple database connections
- Health Check: Automatic validation of database connectivity on startup
Supported Databases
| Database | Status | Features |
|---|---|---|
| MySQL | ✅ Full Support | Queries, Transactions, Schema Analysis, Performance Insights |
| PostgreSQL | ✅ Full Support (v9.6-17) | Queries, Transactions, Schema Analysis, Performance Insights |
| SQLite | ✅ Full Support | File-based & In-memory databases, SQLCipher encryption support |
| Oracle | ✅ Full Support (10g-23c) | Queries, Transactions, Schema Analysis, RAC, Cloud Wallet, TNS |
| TimescaleDB | ✅ Full Support | Hypertables, Time-Series Queries, Continuous Aggregates, Compression, Retention Policies |
Deployment Options
The DB MCP Server can be deployed in multiple ways to suit different environments and integration needs:
Docker Deployment
```bash
# Pull the latest image
docker pull freepeak/db-mcp-server:latest

# Run with mounted config file
docker run -p 9092:9092 \
  -v $(pwd)/config.json:/app/my-config.json \
  -e TRANSPORT_MODE=sse \
  -e CONFIG_PATH=/app/my-config.json \
  freepeak/db-mcp-server
```
Note: Mount your config to `/app/my-config.json`, as the container already ships a default file at `/app/config.json`.
STDIO Mode (IDE Integration)
```bash
# Run the server in STDIO mode
./bin/server -t stdio -c config.json
```
For Cursor IDE integration, add to `.cursor/mcp.json`:

```json
{
  "mcpServers": {
    "stdio-db-mcp-server": {
      "command": "/path/to/db-mcp-server/server",
      "args": ["-t", "stdio", "-c", "/path/to/config.json"]
    }
  }
}
```
SSE Mode (Server-Sent Events)
```bash
# Default configuration (localhost:9092)
./bin/server -t sse -c config.json

# Custom host and port
./bin/server -t sse -host 0.0.0.0 -port 8080 -c config.json
```

Client connection endpoint: `http://localhost:9092/sse`
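As a rough sketch of what a client consumes from that endpoint, the snippet below extracts `data:` payloads from a Server-Sent Events stream (this is standard SSE framing, not code from this project; the sample payload is hypothetical):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseSSEData collects the payload of each "data:" line from an
// SSE stream, the wire format served by the /sse endpoint.
func parseSSEData(stream string) []string {
	var events []string
	sc := bufio.NewScanner(strings.NewReader(stream))
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "data:") {
			events = append(events, strings.TrimSpace(strings.TrimPrefix(line, "data:")))
		}
	}
	return events
}

func main() {
	raw := "event: message\ndata: {\"jsonrpc\":\"2.0\"}\n\n"
	fmt.Println(parseSSEData(raw))
}
```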
Source Code Installation
```bash
# Clone the repository
git clone https://github.com/FreePeak/db-mcp-server.git
cd db-mcp-server

# Build the server
make build

# Run the server
./bin/server -t sse -c config.json
```
Configuration
Database Configuration File
Create a config.json file with your database connections:
```json
{
  "connections": [
    {
      "id": "mysql1",
      "type": "mysql",
      "host": "mysql1",
      "port": 3306,
      "name": "db1",
      "user": "user1",
      "password": "password1",
      "query_timeout": 60,
      "max_open_conns": 20,
      "max_idle_conns": 5,
      "conn_max_lifetime_seconds": 300,
      "conn_max_idle_time_seconds": 60
    },
    {
      "id": "postgres1",
      "type": "postgres",
      "host": "postgres1",
      "port": 5432,
      "name": "db1",
      "user": "user1",
      "password": "password1"
    },
    {
      "id": "sqlite_app",
      "type": "sqlite",
      "database_path": "./data/app.db",
      "journal_mode": "WAL",
      "cache_size": 2000,
      "read_only": false,
      "use_modernc_driver": true,
      "query_timeout": 30,
      "max_open_conns": 1,
      "max_idle_conns": 1
    },
    {
      "id": "sqlite_encrypted",
      "type": "sqlite",
      "database_path": "./data/secure.db",
      "encryption_key": "your-secret-key-here",
      "journal_mode": "WAL",
      "use_modernc_driver": false
    },
    {
      "id": "sqlite_memory",
      "type": "sqlite",
      "database_path": ":memory:",
      "cache_size": 1000,
      "use_modernc_driver": true
    }
  ]
}
```
Command-Line Options
```bash
# Basic syntax
./bin/server -t <transport> -c <config-file>

# SSE transport options
./bin/server -t sse -host <hostname> -port <port> -c <config-file>

# Lazy loading mode (recommended for 10+ databases)
./bin/server -t stdio -c <config-file> --lazy-loading

# Customize log directory (useful for multi-project setups)
./bin/server -t stdio -c <config-file> -log-dir /tmp/db-mcp-logs

# Inline database configuration
./bin/server -t stdio -db-config '{"connections":[...]}'

# Environment variable configuration
export DB_CONFIG='{"connections":[...]}'
./bin/server -t stdio
```
Available Flags:
- `-t, -transport`: Transport mode (`stdio` or `sse`)
- `-c, -config`: Path to database configuration file
- `-p, -port`: Server port for SSE mode (default: 9092)
- `-h, -host`: Server host for SSE mode (default: `localhost`)
- `-log-level`: Log level (`debug`, `info`, `warn`, `error`)
- `-log-dir`: Directory for log files (default: `./logs` in the current directory)
- `-db-config`: Inline JSON database configuration
SQLite Configuration Options
When using SQLite databases, you can leverage these additional configuration options:
SQLite Connection Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `database_path` | string | Required | Path to the SQLite database file, or `:memory:` for an in-memory database |
| `encryption_key` | string | - | Key for SQLCipher-encrypted databases |
| `read_only` | boolean | `false` | Open the database in read-only mode |
| `cache_size` | integer | `2000` | SQLite cache size in pages |
| `journal_mode` | string | `"WAL"` | Journal mode: `DELETE`, `TRUNCATE`, `PERSIST`, `WAL`, `OFF` |
| `use_modernc_driver` | boolean | `true` | Use `modernc.org/sqlite` (CGO-free) or `mattn/go-sqlite3` |
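The defaults in the table can be sketched as a small helper that fills in unset options (illustrative only — the server applies its own defaults internally; `SQLiteOptions` and `applyDefaults` are hypothetical names):

```go
package main

import "fmt"

// SQLiteOptions mirrors the table above; zero values mean "not set".
type SQLiteOptions struct {
	DatabasePath string
	JournalMode  string
	CacheSize    int
	ReadOnly     bool
	UseModernc   *bool // pointer so "unset" is distinguishable from false
}

// applyDefaults fills unset fields with the documented defaults:
// journal_mode "WAL", cache_size 2000, use_modernc_driver true.
func applyDefaults(o SQLiteOptions) SQLiteOptions {
	if o.JournalMode == "" {
		o.JournalMode = "WAL"
	}
	if o.CacheSize == 0 {
		o.CacheSize = 2000
	}
	if o.UseModernc == nil {
		t := true
		o.UseModernc = &t
	}
	return o
}

func main() {
	opts := applyDefaults(SQLiteOptions{DatabasePath: ":memory:"})
	fmt.Println(opts.JournalMode, opts.CacheSize, *opts.UseModernc)
}
```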
SQLite Examples
Basic File Database
```json
{
  "id": "my_sqlite_db",
  "type": "sqlite",
  "database_path": "./data/myapp.db",
  "journal_mode": "WAL",
  "cache_size": 2000
}
```
Encrypted Database (SQLCipher)
```json
{
  "id": "encrypted_db",
  "type": "sqlite",
  "database_path": "./data/secure.db",
  "encryption_key": "your-secret-encryption-key",
  "use_modernc_driver": false
}
```
In-Memory Database
```json
{
  "id": "memory_db",
  "type": "sqlite",
  "database_path": ":memory:",
  "cache_size": 1000
}
```
Read-Only Database
```json
{
  "id": "reference_data",
  "type": "sqlite",
  "database_path": "./data/reference.db",
  "read_only": true,
  "journal_mode": "DELETE"
}
```
Oracle Configuration Options
When using Oracle databases, you can leverage these additional configuration options:
Oracle Connection Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `host` | string | Required | Oracle database host |
| `port` | integer | `1521` | Oracle listener port |
| `service_name` | string | - | Service name (recommended for RAC) |
| `sid` | string | - | System identifier (legacy; prefer `service_name`) |
| `user` | string | Required | Database username |
| `password` | string | Required | Database password |
| `wallet_location` | string | - | Path to the Oracle Cloud wallet directory |
| `tns_admin` | string | - | Path to the directory containing `tnsnames.ora` |
| `tns_entry` | string | - | Named entry from `tnsnames.ora` |
| `edition` | string | - | Edition-Based Redefinition edition name |
| `pooling` | boolean | `false` | Enable driver-level connection pooling |
| `standby_sessions` | boolean | `false` | Allow queries on standby databases |
| `nls_lang` | string | `AMERICAN_AMERICA.AL32UTF8` | Character set configuration |
Oracle Examples
Basic Oracle Connection (Development)
```json
{
  "id": "oracle_dev",
  "type": "oracle",
  "host": "localhost",
  "port": 1521,
  "service_name": "XEPDB1",
  "user": "testuser",
  "password": "testpass",
  "max_open_conns": 50,
  "max_idle_conns": 10,
  "conn_max_lifetime_seconds": 1800
}
```
Oracle with SID (Legacy)
```json
{
  "id": "oracle_legacy",
  "type": "oracle",
  "host": "oracledb.company.com",
  "port": 1521,
  "sid": "ORCL",
  "user": "app_user",
  "password": "app_password"
}
```
Oracle Cloud Autonomous Database (with Wallet)
```json
{
  "id": "oracle_cloud",
  "type": "oracle",
  "user": "ADMIN",
  "password": "your-cloud-password",
  "wallet_location": "/path/to/wallet_DBNAME",
  "service_name": "dbname_high"
}
```
Oracle RAC (Real Application Clusters)
```json
{
  "id": "oracle_rac",
  "type": "oracle",
  "host": "scan.company.com",
  "port": 1521,
  "service_name": "production",
  "user": "app_user",
  "password": "app_password",
  "max_open_conns": 100,
  "max_idle_conns": 20
}
```
Oracle with TNS Entry
```json
{
  "id": "oracle_tns",
  "type": "oracle",
  "tns_admin": "/opt/oracle/network/admin",
  "tns_entry": "PROD_DB",
  "user": "app_user",
  "password": "app_password"
}
```
Oracle with Edition-Based Redefinition
```json
{
  "id": "oracle_ebr",
  "type": "oracle",
  "host": "oracledb.company.com",
  "port": 1521,
  "service_name": "production",
  "user": "app_user",
  "password": "app_password",
  "edition": "v2_0"
}
```
Oracle Connection String Priority
When multiple connection methods are configured, the following priority is used:
1. TNS Entry: used when `tns_entry` and `tns_admin` are configured
2. Wallet: used when `wallet_location` is configured (Oracle Cloud)
3. Standard: `host:port/service_name`, the default method
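The priority above can be sketched as a selection function in Go. This is an illustration of the documented precedence, not the server's code, and the returned strings are illustrative labels rather than the driver's actual connection-string syntax:

```go
package main

import "fmt"

// OracleConfig holds the fields relevant to connection-method selection.
type OracleConfig struct {
	Host, Port, ServiceName string
	TNSAdmin, TNSEntry      string
	WalletLocation          string
}

// connectionMethod applies the documented priority:
// TNS entry first, then wallet, then standard host:port/service_name.
func connectionMethod(c OracleConfig) string {
	switch {
	case c.TNSEntry != "" && c.TNSAdmin != "":
		return "tns:" + c.TNSEntry
	case c.WalletLocation != "":
		return "wallet:" + c.ServiceName
	default:
		return fmt.Sprintf("%s:%s/%s", c.Host, c.Port, c.ServiceName)
	}
}

func main() {
	fmt.Println(connectionMethod(OracleConfig{TNSAdmin: "/opt/oracle/network/admin", TNSEntry: "PROD_DB"}))
	fmt.Println(connectionMethod(OracleConfig{WalletLocation: "/path/to/wallet", ServiceName: "dbname_high"}))
	fmt.Println(connectionMethod(OracleConfig{Host: "localhost", Port: "1521", ServiceName: "XEPDB1"}))
}
```

Note that a TNS entry wins even when a wallet is also configured, so removing `tns_entry` is the way to fall back to wallet-based connections.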
Available Tools
For each connected database, DB MCP Server automatically generates these specialized tools:
Query Tools
| Tool Name | Description |
|---|---|
| `query_<db_id>` | Execute SELECT queries and get results as a tabular dataset |
| `execute_<db_id>` | Run data manipulation statements (INSERT, UPDATE, DELETE) |
| `transaction_<db_id>` | Begin, commit, and rollback transactions |
Schema Tools
| Tool Name | Description |
|---|---|
| `schema_<db_id>` | Get information about tables, columns, indexes, and foreign keys |
| `generate_schema_<db_id>` | Generate SQL or code from database schema |
Performance Tools
| Tool Name | Description |
|---|---|
| `performance_<db_id>` | Analyze query performance and get optimization suggestions |
TimescaleDB Tools
For PostgreSQL databases with TimescaleDB extension, these additional specialized tools are available:
| Tool Name | Description |
|---|---|
| `timescaledb_<db_id>` | Perform general TimescaleDB operations |
| `create_hypertable_<db_id>` | Convert a standard table to a TimescaleDB hypertable |
| `list_hypertables_<db_id>` | List all hypertables in the database |
| `time_series_query_<db_id>` | Execute optimized time-series queries with bucketing |
| `time_series_analyze_<db_id>` | Analyze time-series data patterns |
| `continuous_aggregate_<db_id>` | Create materialized views that automatically update |
| `refresh_continuous_aggregate_<db_id>` | Manually refresh continuous aggregates |
For detailed documentation on TimescaleDB tools, see TIMESCALEDB_TOOLS.md.
Examples
Querying Multiple Databases
```sql
-- Query the MySQL database
query_mysql1("SELECT * FROM users LIMIT 10")

-- Query the PostgreSQL database in the same context
query_postgres1("SELECT * FROM products WHERE price > 100")

-- Query the SQLite database
query_sqlite_app("SELECT * FROM local_data WHERE created_at > datetime('now', '-1 day')")

-- Query the Oracle database
query_oracle_dev("SELECT * FROM employees WHERE hire_date > SYSDATE - 30")
```
Managing Transactions
```sql
-- Start a transaction
transaction_mysql1("BEGIN")

-- Execute statements within the transaction
execute_mysql1("INSERT INTO orders (customer_id, product_id) VALUES (1, 2)")
execute_mysql1("UPDATE inventory SET stock = stock - 1 WHERE product_id = 2")

-- Commit or rollback
transaction_mysql1("COMMIT")
-- OR
transaction_mysql1("ROLLBACK")
```
Exploring Database Schema
```sql
-- Get all tables in the database
schema_mysql1("tables")

-- Get columns for a specific table
schema_mysql1("columns", "users")

-- Get constraints
schema_mysql1("constraints", "orders")
```
Working with SQLite-Specific Features
```sql
-- Create a table in SQLite
execute_sqlite_app("CREATE TABLE IF NOT EXISTS local_cache (key TEXT PRIMARY KEY, value TEXT, timestamp DATETIME)")

-- Use SQLite-specific date functions
query_sqlite_app("SELECT * FROM events WHERE date(created_at) = date('now')")

-- Query the SQLite master table for schema information
query_sqlite_app("SELECT name, sql FROM sqlite_master WHERE type='table' AND name NOT LIKE 'sqlite_%'")

-- Performance optimization with WAL mode
execute_sqlite_app("PRAGMA journal_mode = WAL")
execute_sqlite_app("PRAGMA synchronous = NORMAL")
```
Working with Oracle-Specific Features
```sql
-- Query user tables (excludes system schemas)
query_oracle_dev("SELECT table_name FROM user_tables ORDER BY table_name")

-- Use Oracle-specific date functions
query_oracle_dev("SELECT employee_id, hire_date FROM employees WHERE hire_date >= TRUNC(SYSDATE, 'YEAR')")

-- Oracle sequence operations
execute_oracle_dev("CREATE SEQUENCE emp_seq START WITH 1000 INCREMENT BY 1")
query_oracle_dev("SELECT emp_seq.NEXTVAL FROM DUAL")

-- Oracle-specific data types
query_oracle_dev("SELECT order_id, TO_CHAR(order_date, 'YYYY-MM-DD HH24:MI:SS') FROM orders")

-- Get schema metadata from the Oracle data dictionary
query_oracle_dev("SELECT column_name, data_type, nullable FROM user_tab_columns WHERE table_name = 'EMPLOYEES'")

-- Use Oracle analytic functions
query_oracle_dev("SELECT employee_id, salary, RANK() OVER (ORDER BY salary DESC) as salary_rank FROM employees")
```
Troubleshooting
Common Issues
- Connection Failures: Verify network connectivity and database credentials
- Permission Errors: Ensure the database user has appropriate permissions
- Timeout Issues: Check the `query_timeout` setting in your configuration
Logs
Enable verbose logging for troubleshooting:
```bash
./bin/server -t sse -c config.json -v
```
Testing
Running Tests
The project includes comprehensive unit and integration tests for all supported databases.
Unit Tests
Run unit tests (no database required):
```bash
make test
# or
go test -short ./...
```
Integration Tests
Integration tests require running database instances. We provide Docker Compose configurations for easy setup.
Test All Databases:
```bash
# Start test databases
docker-compose -f docker-compose.test.yml up -d

# Run all integration tests
go test ./... -v

# Stop test databases
docker-compose -f docker-compose.test.yml down -v
```
Test Oracle Database:
```bash
# Start Oracle test environment
./oracle-test.sh start

# Run Oracle tests
./oracle-test.sh test
# or manually
ORACLE_TEST_HOST=localhost go test -v ./pkg/db -run TestOracle
ORACLE_TEST_HOST=localhost go test -v ./pkg/dbtools -run TestOracle

# Stop Oracle test environment
./oracle-test.sh stop

# Full cleanup (removes volumes)
./oracle-test.sh cleanup
```
Test TimescaleDB:
```bash
# Start TimescaleDB test environment
./timescaledb-test.sh start

# Run TimescaleDB tests
TIMESCALEDB_TEST_HOST=localhost go test -v ./pkg/db/timescale ./internal/delivery/mcp

# Stop TimescaleDB test environment
./timescaledb-test.sh stop
```
Regression Tests
Run comprehensive regression tests across all database types:
```bash
# Ensure all test databases are running
docker-compose -f docker-compose.test.yml up -d
./oracle-test.sh start

# Run regression tests
MYSQL_TEST_HOST=localhost \
POSTGRES_TEST_HOST=localhost \
ORACLE_TEST_HOST=localhost \
go test -v ./pkg/db -run TestRegression

# Run connection pooling tests
go test -v ./pkg/db -run TestConnectionPooling
```
Continuous Integration
All tests run automatically on every pull request via GitHub Actions. The CI pipeline includes:
- Unit Tests: Fast tests that don't require database connections
- Integration Tests: Tests against MySQL, PostgreSQL, SQLite, and Oracle databases
- Regression Tests: Comprehensive tests ensuring backward compatibility
- Linting: Code quality checks with golangci-lint
Contributing
We welcome contributions to the DB MCP Server project! To contribute:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'feat: add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
Please see our CONTRIBUTING.md file for detailed guidelines.
Testing Your Changes
Before submitting a pull request, please ensure:
- All unit tests pass: `go test -short ./...`
- Integration tests pass for affected databases
- Code follows the project's style guidelines: `golangci-lint run ./...`
- New features include appropriate test coverage
License
This project is licensed under the MIT License - see the LICENSE file for details.