mcp-database-server
@adevguide/mcp-database-server
Production-grade Model Context Protocol (MCP) server for unified SQL database access. Connect multiple databases through a single MCP server with schema discovery, relationship mapping, caching, and safety controls.
- npm: https://www.npmjs.com/package/@adevguide/mcp-database-server
- GitHub: https://github.com/iPraBhu/mcp-database-server
Features
- Multi-database support: PostgreSQL, MySQL/MariaDB, SQLite, SQL Server, Oracle
- Automatic schema discovery: tables, columns, indexes, foreign keys, relationships
- Persistent schema caching: TTL + versioning, manual refresh, cache stats
- Relationship inference: foreign keys + heuristics
- Query intelligence: tracking, statistics, timeouts
- Join assistance: suggested join paths based on relationship graphs
- Safety controls: read-only mode, allow/deny write operations, secret redaction
- Query optimization: index recommendations, performance profiling, slow query detection
- Performance monitoring: detailed execution analytics, bottleneck identification
- Query rewriting: automated optimization suggestions with performance impact estimates
Why this exists
This project was originally vibe-coded to solve real issues I was facing when wiring LLM tools to multiple SQL databases (consistent connectivity, schema discovery, and safe query execution). It has since been hardened into a reusable MCP server with caching and security defaults.
Architecture
┌─────────────────────────────────────────────────────────┐
│ MCP Client │
│ (Claude Desktop, IDEs, etc.) │
└────────────────┬────────────────────────────────────────┘
│ JSON-RPC over stdio
┌────────────────▼────────────────────────────────────────┐
│ MCP Database Server │
│ ┌──────────────────────────────────────────────────┐ │
│ │ Schema Cache (TTL + Versioning) │ │
│ └──────────────────────────────────────────────────┘ │
│ ┌──────────────────────────────────────────────────┐ │
│ │ Query Tracker (History + Statistics) │ │
│ └──────────────────────────────────────────────────┘ │
│ ┌──────────────────────────────────────────────────┐ │
│ │ Security Layer (Read-only, Operation Controls) │ │
│ └──────────────────────────────────────────────────┘ │
└────┬─────────┬─────────┬──────────┬──────────┬─────────┘
│ │ │ │ │
┌────▼───┐ ┌──▼────┐ ┌──▼─────┐ ┌──▼──────┐ ┌▼────────┐
│Postgres│ │ MySQL │ │ SQLite │ │ MSSQL │ │ Oracle │
└────────┘ └───────┘ └────────┘ └─────────┘ └─────────┘
Supported Databases
| Database | Driver | Status | Notes |
|---|---|---|---|
| PostgreSQL | pg | ✅ Full Support | Includes CockroachDB compatibility |
| MySQL/MariaDB | mysql2 | ✅ Full Support | Includes Amazon Aurora MySQL compatibility |
| SQLite | sql.js | ✅ Full Support | WASM-backed SQLite with file persistence |
| SQL Server | tedious | ✅ Full Support | Microsoft SQL Server / Azure SQL |
| Oracle | oracledb | ⚠️ Stub | Requires Oracle Instant Client |
Installation
Global install (recommended)
npm install -g @adevguide/mcp-database-server
Run:
mcp-database-server --config /absolute/path/to/.mcp-database-server.config
Run via npx (no global install)
npx -y @adevguide/mcp-database-server --config /absolute/path/to/.mcp-database-server.config
Install from source
git clone https://github.com/iPraBhu/mcp-database-server.git
cd mcp-database-server
npm install
npm run build
node dist/index.js --config ./.mcp-database-server.config
Configuration
Create a .mcp-database-server.config file in your project root:
Note: The config file is automatically discovered inside the current project tree. If you don't specify --config, the tool searches upward from the current directory until it reaches the detected project root (for example, a directory containing package.json or .git). It does not continue past the project root. If you use credentialCommand, pass --config explicitly.
{
"databases": [
{
"id": "postgres-main",
"type": "postgres",
"secretRef": "DB_URL_POSTGRES",
"readOnly": true,
"pool": {
"min": 2,
"max": 10,
"idleTimeoutMillis": 30000
},
"introspection": {
"includeViews": true,
"excludeSchemas": ["pg_catalog"]
}
},
{
"id": "mariadb-reporting",
"type": "mysql",
"secretRef": "DB_URL_MARIADB",
"readOnly": true,
"pool": {
"min": 1,
"max": 5
}
},
{
"id": "sqlite-local",
"type": "sqlite",
"path": "./data/app.db"
}
],
"cache": {
"directory": ".sql-mcp-cache",
"ttlMinutes": 10
},
"security": {
"allowWrite": false,
"allowedWriteOperations": ["INSERT", "UPDATE"],
"disableDangerousOperations": true,
"redactSecrets": true
},
"logging": {
"level": "info",
"pretty": false
}
}
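The upward config discovery described in the note above can be sketched roughly as follows. This is an illustration only, not the server's actual implementation; the function name is hypothetical:
import { existsSync } from "node:fs";
import { dirname, join } from "node:path";

// Illustrative sketch: walk from the current directory toward the filesystem
// root, returning the first .mcp-database-server.config found, and stop at
// the detected project root (a directory containing package.json or .git).
function findConfig(startDir: string): string | undefined {
  let dir = startDir;
  for (;;) {
    const candidate = join(dir, ".mcp-database-server.config");
    if (existsSync(candidate)) return candidate;
    const atProjectRoot =
      existsSync(join(dir, "package.json")) || existsSync(join(dir, ".git"));
    const parent = dirname(dir);
    if (atProjectRoot || parent === dir) return undefined; // never search past the root
    dir = parent;
  }
}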
Configuration Reference
Database Configuration
Each database in the databases array represents a connection to a SQL database.
Core Properties
| Property | Type | Required | Default | Description |
|---|---|---|---|---|
id | string | ✅ Yes | - | Unique identifier for this database connection. Used in all MCP tool calls. Must be unique across all databases. |
type | enum | ✅ Yes | - | Database system type. Valid values: postgres, mysql, sqlite, mssql, oracle |
url | string | Conditional* | - | Explicit database connection string. Supports environment variable interpolation with ${DB_URL} but is not the recommended secret-handling path. |
secretRef | string | Conditional* | - | Name of an environment variable that contains the full connection string. Resolved from the process environment or a .env file beside the config file. |
credentialCommand | string | Conditional* | - | Shell command that prints the full connection string to stdout at startup. Useful for 1Password, Vault, AWS helpers, or custom secret tooling. Requires launching the server with an explicit --config path. |
path | string | Conditional** | - | Filesystem path to SQLite database file. Required only for type: sqlite. Can be relative or absolute. |
readOnly | boolean | No | true | When true, blocks all write operations (INSERT, UPDATE, DELETE, etc.). Recommended for production safety. |
eagerConnect | boolean | No | false | When true, connects to database immediately at startup (fail-fast). When false, connects on first query (lazy loading). |
* One of url, secretRef, or credentialCommand is required for postgres, mysql, mssql, and oracle.
** Required for sqlite only.
Connection String Formats:
PostgreSQL: postgresql://username:password@host:5432/database
MySQL: mysql://username:password@host:3306/database
SQL Server: Server=host,1433;Database=dbname;User Id=user;Password=pass
SQLite: (use path property instead)
Oracle: username/password@host:1521/servicename
Connection Pool Configuration
The pool object controls connection pooling behavior. Improves performance by reusing database connections.
| Property | Type | Required | Default | Description |
|---|---|---|---|---|
min | number | No | 2 | Minimum number of connections to maintain in the pool. Kept alive even when idle. |
max | number | No | 10 | Maximum number of concurrent connections. Do not exceed your database's connection limit. |
idleTimeoutMillis | number | No | 30000 | Time (ms) to keep idle connections alive before closing. Example: 60000 = 1 minute. |
connectionTimeoutMillis | number | No | 10000 | Time (ms) to wait when establishing a connection before timing out. Fail-fast if database is unreachable. |
Recommendations:
- Development: min: 1, max: 5
- Production (low traffic): min: 2, max: 10
- Production (high traffic): min: 5, max: 20
Introspection Configuration
The introspection object controls schema discovery behavior. Determines what database objects are analyzed.
| Property | Type | Required | Default | Description |
|---|---|---|---|---|
includeViews | boolean | No | true | Include database views in schema discovery. Set to false if views cause performance issues. |
includeRoutines | boolean | No | false | Include stored procedures and functions. (Not fully implemented - planned feature) |
maxTables | number | No | unlimited | Limit introspection to first N tables. Useful for databases with 1000+ tables. May result in incomplete relationship discovery. |
includeSchemas | string[] | No | all | Whitelist of schemas to introspect. Only applicable to PostgreSQL and SQL Server. Example: ["public", "app"] |
excludeSchemas | string[] | No | none | Blacklist of schemas to skip. Common values: ["pg_catalog", "information_schema", "sys"] |
Schema vs Database:
- PostgreSQL/SQL Server: support multiple schemas per database; use includeSchemas/excludeSchemas.
- MySQL/MariaDB: schema = database; use the database name in the connection string.
- SQLite: single-file database, no schema concept.
Cache Configuration
Controls schema metadata caching to improve startup performance and reduce database load.
| Property | Type | Required | Default | Description |
|---|---|---|---|---|
directory | string | No | .sql-mcp-cache | Directory path where cached schema files are stored. One JSON file per database. |
ttlMinutes | number | No | 10 | Time-To-Live in minutes. How long cached schema is considered valid before automatic refresh. |
Cache Behavior:
- On Startup: Loads schema from cache if available and not expired
- After TTL Expiry: Next query triggers automatic re-introspection
- Manual Refresh: Use the clear_cache tool or introspect_schema with forceRefresh: true
- Cache Files: Stored as {database-id}.json (e.g., postgres-main.json)
Recommended TTL Values:
- Development: 5 minutes (schema changes frequently)
- Staging: 30-60 minutes
- Production (static schema): 1440 minutes (24 hours)
- Production (active schema changes): 60-240 minutes (1-4 hours)
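A minimal sketch of the TTL check, assuming a cached-at timestamp is stored alongside each schema file (field and function names here are illustrative, not the server's actual internals):
// Illustrative TTL check: a cache entry is valid while its age stays below
// the configured ttlMinutes.
interface CacheEntry {
  cachedAt: number; // epoch millis when the schema was introspected
  version: string;  // content hash of the cached schema
}

function isCacheValid(entry: CacheEntry, ttlMinutes: number): boolean {
  const ageMs = Date.now() - entry.cachedAt;
  return ageMs < ttlMinutes * 60_000;
}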
Security Configuration
Comprehensive security controls to protect your databases from unauthorized or dangerous operations.
| Property | Type | Required | Default | Description |
|---|---|---|---|---|
allowWrite | boolean | No | false | Master switch for write operations. When false, all writes are blocked across all databases. |
allowedWriteOperations | string[] | No | all | Whitelist of allowed SQL operations when allowWrite: true. Valid values: INSERT, UPDATE, DELETE, CREATE, ALTER, DROP, TRUNCATE, REPLACE, MERGE |
disableDangerousOperations | boolean | No | true | Extra safety layer. When true, blocks DELETE, TRUNCATE, and DROP operations even if writes are allowed. Prevents accidental data loss. |
redactSecrets | boolean | No | true | Redact connection strings, passwords, and similar credentials in logs and returned error messages. |
Security Layers (Evaluated in Order):
- Database-level readOnly → blocks all writes for that specific database
- Global allowWrite → master switch for all databases
- disableDangerousOperations → blocks DELETE/TRUNCATE/DROP specifically
- allowedWriteOperations → whitelist of permitted operations
Example Configurations:
// Read-only access (default - safest)
{
"allowWrite": false
}
// Allow INSERT and UPDATE only (no deletes)
{
"allowWrite": true,
"allowedWriteOperations": ["INSERT", "UPDATE"],
"disableDangerousOperations": true
}
// Full write access (development only - dangerous!)
{
"allowWrite": true,
"disableDangerousOperations": false
}
Logging Configuration
Controls log output verbosity and formatting.
| Property | Type | Required | Default | Description |
|---|---|---|---|---|
level | enum | No | info | Log level. Valid values: trace, debug, info, warn, error. Lower levels include higher levels. |
pretty | boolean | No | false | When true, formats logs as human-readable text. When false, outputs structured JSON (better for production log aggregation). |
Log Levels:
- trace: everything (extremely verbose; use for debugging only)
- debug: detailed diagnostic information
- info: general informational messages (recommended for production)
- warn: warnings that don't prevent operation
- error: error messages only
Recommendations:
- Development: level: "debug", pretty: true
- Production: level: "info", pretty: false
- Troubleshooting: level: "trace", pretty: true
Complete Configuration Example
{
"databases": [
{
"id": "postgres-production",
"type": "postgres",
"url": "${DATABASE_URL}",
"readOnly": true,
"pool": {
"min": 5,
"max": 20,
"idleTimeoutMillis": 60000,
"connectionTimeoutMillis": 5000
},
"introspection": {
"includeViews": true,
"includeRoutines": false,
"excludeSchemas": ["pg_catalog", "information_schema"]
},
"eagerConnect": true
},
{
"id": "mysql-analytics",
"type": "mysql",
"url": "${MYSQL_URL}",
"readOnly": true,
"pool": {
"min": 2,
"max": 10
},
"introspection": {
"includeViews": true,
"maxTables": 100
}
},
{
"id": "sqlite-local",
"type": "sqlite",
"path": "./data/app.db",
"readOnly": true
}
],
"cache": {
"directory": ".sql-mcp-cache",
"ttlMinutes": 60
},
"security": {
"allowWrite": false,
"allowedWriteOperations": ["INSERT", "UPDATE"],
"disableDangerousOperations": true,
"redactSecrets": true
},
"logging": {
"level": "info",
"pretty": false
}
}
Secret Resolution
Recommended approach: secretRef
Keep secrets out of the MCP client config and out of the server config values themselves.
Example Configuration:
{
"databases": [
{
"id": "production-db",
"type": "postgres",
"secretRef": "DATABASE_URL"
}
]
}
The server resolves secretRef from the process environment first, and then from a .env file next to .mcp-database-server.config.
Environment File (.env):
DATABASE_URL=postgresql://user:password@localhost:5432/dbname
DB_URL_MYSQL=mysql://user:password@localhost:3306/dbname
DB_URL_MARIADB=mysql://report_user:[email protected]:3306/reporting
DB_URL_MSSQL=Server=host,1433;Database=db;User Id=sa;Password=pass
Alternative: credentialCommand
{
"databases": [
{
"id": "analytics-db",
"type": "mysql",
"credentialCommand": "op read op://analytics/mysql/url"
}
]
}
The command must print only the connection string to stdout.
For safety, credentialCommand is only allowed when the server is launched with an explicit --config path. Auto-discovered configs cannot execute credential commands.
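A rough sketch of how such a command could be resolved at startup (illustrative only; the server's actual resolution logic may differ):
import { execSync } from "node:child_process";

// Illustrative: run the configured credentialCommand and treat its trimmed
// stdout as the connection string. Real code should validate the output and
// never log it (see redactSecrets).
function resolveCredentialCommand(command: string): string {
  const stdout = execSync(command, { encoding: "utf8" });
  return stdout.trim();
}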
Still Supported: direct env interpolation
You can still write "url": "${DATABASE_URL}", but secretRef is the cleaner option because it makes the secret source explicit.
Best Practices:
- ✅ Store the .env file outside version control (add it to .gitignore)
- ✅ Use different .env files for each environment (dev, staging, prod)
- ✅ Never commit credentials to git repositories
- ✅ Use secret management services (AWS Secrets Manager, HashiCorp Vault) in production
Connection String Reference
| Database | Format | Example |
|---|---|---|
| PostgreSQL | postgresql://user:pass@host:port/db | postgresql://admin:secret@localhost:5432/myapp |
| MySQL | mysql://user:pass@host:port/db | mysql://root:password@localhost:3306/myapp |
| MariaDB | mysql://user:pass@host:port/db | mysql://report_user:[email protected]:3306/reporting |
| SQL Server | Server=host,port;Database=db;User Id=user;Password=pass | Server=localhost,1433;Database=myapp;User Id=sa;Password=secret |
| SQLite | Use path property | "path": "./data/app.db" or "path": "/var/db/app.sqlite" |
Additional Parameters:
PostgreSQL:
postgresql://user:pass@host:5432/db?sslmode=require&connect_timeout=10
MySQL:
mysql://user:pass@host:3306/db?charset=utf8mb4&timezone=Z
MariaDB:
mysql://user:pass@host:3306/db?charset=utf8mb4
SQL Server:
Server=host;Database=db;User Id=user;Password=pass;Encrypt=true;TrustServerCertificate=false
MCP Client Integration
Configuration File Locations
| MCP Client | Configuration File Path |
|---|---|
| Claude Desktop (macOS) | ~/Library/Application Support/Claude/claude_desktop_config.json |
| Claude Desktop (Windows) | %APPDATA%\Claude\claude_desktop_config.json |
| Cline (VS Code) | VS Code settings → MCP Servers |
| Other Clients | Refer to client-specific documentation |
Setup Methods
Method 1: Global npm Installation
Configuration:
{
"mcpServers": {
"database": {
"command": "mcp-database-server",
"args": ["--config", "/absolute/path/to/.mcp-database-server.config"]
}
}
}
Method 2: Source Installation
Configuration:
{
"mcpServers": {
"database": {
"command": "node",
"args": [
"/absolute/path/to/mcp-database-server/dist/index.js",
"--config",
"/absolute/path/to/.mcp-database-server.config"
]
}
}
}
Configuration Properties
| Property | Description | Example |
|---|---|---|
command | Executable to run. Use mcp-database-server for npm install, node for source install. | "mcp-database-server" |
args | Array of command-line arguments. First arg is usually --config followed by config file path. | ["--config", "/path/to/config"] |
env | Optional environment variables passed to the server. Prefer secretRef with a local .env file or external secret tooling for DB credentials. | {"APP_ENV": "production"} |
Finding Absolute Paths:
# macOS/Linux
cd /path/to/mcp-database-server
pwd # prints: /Users/username/projects/mcp-database-server
# Windows (PowerShell)
cd C:\path\to\mcp-database-server
$PWD.Path # prints: C:\Users\username\projects\mcp-database-server
Available MCP Tools
This server provides 15 tools for comprehensive database interaction and optimization.
Tool Reference
| Tool | Purpose | Write Access | Cached Data |
|---|---|---|---|
list_databases | List all configured databases with status | No | Uses cache |
introspect_schema | Discover and cache database schema | No | Writes cache |
get_schema | Retrieve cached schema metadata | No | Reads cache |
run_query | Execute SQL queries with safety controls | Conditional* | Updates stats |
export_query | Export large read-only query results to a local file | No | No cache |
explain_query | Analyze query execution plans | No | No cache |
suggest_joins | Get intelligent join path recommendations | No | Uses cache |
clear_cache | Clear schema cache and statistics | No | Clears cache |
cache_status | View cache health and statistics | No | Reads cache |
health_check | Test database connectivity | No | No cache |
analyze_performance | Get detailed performance analytics | No | Uses stats |
suggest_indexes | Analyze queries and recommend indexes | No | Uses stats |
detect_slow_queries | Identify and alert on slow queries | No | Uses stats |
rewrite_query | Suggest optimized query versions | No | Uses cache |
profile_query | Profile query performance with bottlenecks | No | No cache |
* Requires allowWrite: true and respects security settings
1. list_databases
Lists all configured databases with their connection status and cache information.
Input Parameters:
None required.
Response:
[
{
"id": "postgres-main",
"type": "postgres",
"connected": true,
"cached": true,
"cacheAge": 45000,
"version": "abc123"
}
]
Response Fields:
| Field | Type | Description |
|---|---|---|
id | string | Database identifier from configuration |
type | string | Database type (postgres, mysql, sqlite, mssql, oracle) |
connected | boolean | Whether database connection is active |
cached | boolean | Whether schema is currently cached |
cacheAge | number | Age of cached schema in milliseconds (if cached) |
version | string | Cache version hash (if cached) |
2. introspect_schema
Discovers and caches complete database schema including tables, columns, indexes, foreign keys, and relationships.
Input Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
dbId | string | Yes | Database identifier to introspect |
forceRefresh | boolean | No | Force re-introspection even if cache is valid (default: false) |
schemaFilter | object | No | Filter which objects to introspect |
schemaFilter.includeSchemas | string[] | No | Only introspect these schemas (PostgreSQL/SQL Server) |
schemaFilter.excludeSchemas | string[] | No | Skip these schemas during introspection |
schemaFilter.includeViews | boolean | No | Include database views (default: true) |
schemaFilter.maxTables | number | No | Limit to first N tables |
Example Request:
{
"dbId": "postgres-main",
"forceRefresh": false,
"schemaFilter": {
"includeSchemas": ["public"],
"excludeSchemas": ["temp"],
"includeViews": true,
"maxTables": 100
}
}
Response:
{
"dbId": "postgres-main",
"version": "a1b2c3d4",
"introspectedAt": "2026-01-26T10:00:00.000Z",
"schemas": [
{
"name": "public",
"tableCount": 15,
"viewCount": 3
}
],
"totalTables": 15,
"totalRelationships": 12
}
3. get_schema
Retrieves detailed schema metadata from cache without querying the database.
Input Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
dbId | string | Yes | Database identifier |
schema | string | No | Filter to specific schema name |
table | string | No | Filter to specific table name |
Example Request:
{
"dbId": "postgres-main",
"schema": "public",
"table": "users"
}
Response: Complete schema metadata including tables, columns, data types, indexes, foreign keys, and inferred relationships.
4. run_query
Executes SQL queries with automatic schema caching, relationship annotation, and comprehensive security controls.
Input Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
dbId | string | Yes | Database identifier to query |
sql | string | Yes | SQL query to execute |
params | array | No | Parameterized query values (prevents SQL injection) |
limit | number | No | Maximum number of rows to return |
offset | number | No | Row offset for paginated reads. Requires limit. |
maxBytes | number | No | Approximate max serialized bytes for returned rows. |
includeMetadata | boolean | No | Include relationship and query statistics metadata in the response. Default: true. |
trackQuery | boolean | No | Track this query in history and performance analytics. Default: true. |
timeoutMs | number | No | Query timeout in milliseconds |
Example Request:
{
"dbId": "postgres-main",
"sql": "SELECT * FROM users WHERE active = $1 ORDER BY id",
"params": [true],
"limit": 10,
"offset": 0,
"maxBytes": 32768,
"includeMetadata": false,
"trackQuery": false,
"timeoutMs": 5000
}
Response:
{
"rows": [
{"id": 1, "name": "Alice", "email": "[email protected]", "active": true},
{"id": 2, "name": "Bob", "email": "[email protected]", "active": true}
],
"columns": ["id", "name", "email", "active"],
"rowCount": 2,
"executionTimeMs": 15,
"metadata": {
"relationships": [...],
"queryStats": {
"totalQueries": 10,
"avgExecutionTime": 20,
"errorCount": 0
},
"pagination": {
"limit": 10,
"offset": 0,
"hasMore": true,
"nextOffset": 10
},
"responseSize": {
"maxBytes": 32768,
"rowsBytes": 1842,
"rowsTrimmed": false,
"omittedRowCount": 0
}
}
}
For the fastest MariaDB/MySQL read path, set "includeMetadata": false and "trackQuery": false when you only need result rows and do not need relationship annotations, query history, or performance analytics for that request.
Security Controls:
- ✅ Write operations blocked by default (allowWrite: false)
- ✅ Dangerous operations (DELETE, TRUNCATE, DROP) disabled by default
- ✅ Specific operations can be whitelisted via allowedWriteOperations
- ✅ Per-database readOnly mode
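For reference, calling run_query from your own code might look like the following sketch, assuming the official TypeScript MCP SDK (@modelcontextprotocol/sdk); exact API surface can vary between SDK versions, and paths are placeholders:
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the server over stdio and issue a single run_query call.
const transport = new StdioClientTransport({
  command: "mcp-database-server",
  args: ["--config", "/absolute/path/to/.mcp-database-server.config"],
});
const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

const result = await client.callTool({
  name: "run_query",
  arguments: {
    dbId: "postgres-main",
    sql: "SELECT id, name FROM users WHERE active = $1",
    params: [true],
    limit: 10,
  },
});
console.log(result);
await client.close();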
5. explain_query
Retrieves database query execution plan without executing the query.
Input Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
dbId | string | Yes | Database identifier |
sql | string | Yes | SQL query to analyze |
params | array | No | Query parameters (for parameterized queries) |
Example Request:
{
"dbId": "postgres-main",
"sql": "SELECT * FROM users JOIN orders ON users.id = orders.user_id WHERE users.active = $1",
"params": [true]
}
Response: Database-native execution plan (format varies by database type).
5a. export_query
Exports large read-only query results to a local file under .sql-mcp-cache/exports.
Execution Strategy:
- MySQL/MariaDB: adapter-level row streaming avoids loading the full result set into memory.
- PostgreSQL and SQLite: paged export by rewriting top-level LIMIT/OFFSET windows (sketched below).
- SQL Server: requires a future adapter-specific streaming path; export currently fails unless the paging rewrite is supported.
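A simplified sketch of the paging approach for non-streaming adapters. The helpers runQuery and writeRows are hypothetical, and this version naively appends a window rather than rewriting an existing top-level LIMIT/OFFSET as the real implementation must:
// Illustrative paged-export loop: fetch LIMIT/OFFSET windows until a short
// page signals the end of the result set.
async function pagedExport(
  runQuery: (sql: string) => Promise<Record<string, unknown>[]>,
  writeRows: (rows: Record<string, unknown>[]) => Promise<void>,
  baseSql: string,
  pageSize = 1000,
): Promise<number> {
  let offset = 0;
  let exported = 0;
  for (;;) {
    const rows = await runQuery(`${baseSql} LIMIT ${pageSize} OFFSET ${offset}`);
    if (rows.length === 0) break;
    await writeRows(rows);
    exported += rows.length;
    if (rows.length < pageSize) break; // short page: no more rows
    offset += pageSize;
  }
  return exported;
}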
Input Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
dbId | string | Yes | Database identifier |
sql | string | Yes | Read-only SQL query to export |
params | array | No | Query parameters |
format | string | No | Output format: jsonl or csv (default: jsonl) |
pageSize | number | No | Page size for non-streaming adapters (default: 1000) |
fileName | string | No | Optional output file name written inside the export directory |
timeoutMs | number | No | Query timeout in milliseconds |
Example Request:
{
"dbId": "mariadb-reporting",
"sql": "SELECT id, email, created_at FROM users ORDER BY id",
"format": "jsonl",
"fileName": "users-export.jsonl",
"timeoutMs": 10000
}
Response:
{
"dbId": "mariadb-reporting",
"outputPath": "/absolute/path/to/.sql-mcp-cache/exports/users-export.jsonl",
"format": "jsonl",
"strategy": "stream",
"rowsExported": 250000,
"columns": ["id", "email", "created_at"],
"fileSizeBytes": 18342011,
"executionTimeMs": 8421
}
6. suggest_joins
Analyzes relationship graph to recommend optimal join paths between multiple tables.
Input Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
dbId | string | Yes | Database identifier |
tables | string[] | Yes | Array of table names to join (2-10 tables) |
Example Request:
{
"dbId": "postgres-main",
"tables": ["users", "orders", "products"]
}
Response:
[
{
"tables": ["users", "orders", "products"],
"joins": [
{
"fromTable": "users",
"toTable": "orders",
"relationship": {
"type": "one-to-many",
"confidence": 1.0
},
"joinCondition": "users.id = orders.user_id"
},
{
"fromTable": "orders",
"toTable": "products",
"relationship": {
"type": "many-to-one",
"confidence": 1.0
},
"joinCondition": "orders.product_id = products.id"
}
],
"sql": "FROM users JOIN orders ON users.id = orders.user_id JOIN products ON orders.product_id = products.id"
}
]
7. clear_cache
Clears schema cache and query statistics for one or all databases.
Input Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
dbId | string | No | Database to clear (omit to clear all) |
Example Request:
{
"dbId": "postgres-main"
}
Response: Confirmation message.
8. cache_status
Retrieves detailed cache statistics and health information.
Input Parameters:
None required.
Response:
{
"directory": ".sql-mcp-cache",
"ttlMinutes": 10,
"databases": [
{
"dbId": "postgres-main",
"cached": true,
"version": "abc123",
"age": 120000,
"expired": false,
"tableCount": 15,
"sizeBytes": 45678
}
]
}
9. health_check
Tests database connectivity and returns status information.
Input Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
dbId | string | No | Database to check (omit to check all) |
Response:
{
"databases": [
{
"dbId": "postgres-main",
"healthy": true,
"connected": true,
"version": "PostgreSQL 15.3",
"responseTimeMs": 12
}
]
}
10. analyze_performance
Get comprehensive performance analytics across all queries for a database.
Input Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
dbId | string | Yes | Database to analyze |
Response:
{
"totalQueries": 1250,
"slowQueries": 23,
"avgExecutionTime": 45.67,
"p95ExecutionTime": 234.5,
"errorRate": 1.2,
"mostFrequentTables": [
{ "table": "users", "count": 456 },
{ "table": "orders", "count": 234 }
],
"performanceTrend": "improving"
}
11. suggest_indexes
Analyze query patterns and recommend optimal database indexes.
Input Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
dbId | string | Yes | Database to analyze |
Response:
[
{
"table": "orders",
"columns": ["customer_id", "order_date"],
"type": "composite",
"reason": "Frequently used in WHERE and JOIN conditions",
"impact": "high"
},
{
"table": "products",
"columns": ["category_id"],
"type": "single",
"reason": "Column category_id is frequently queried",
"impact": "medium"
}
]
12. detect_slow_queries
Identify queries that exceed performance thresholds and provide alerts.
Input Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
dbId | string | Yes | Database to analyze |
Response:
[
{
"dbId": "postgres-main",
"queryId": "a1b2c3",
"sql": "SELECT * FROM large_table WHERE slow_column = ?",
"executionTimeMs": 2500,
"thresholdMs": 1000,
"timestamp": "2024-01-27T10:30:00Z",
"frequency": 5,
"recommendations": [
{
"type": "add_index",
"description": "Add index on slow_column for better performance",
"impact": "high",
"effort": "medium"
}
]
}
]
13. rewrite_query
Suggest optimized versions of SQL queries with performance improvements.
Input Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
dbId | string | Yes | Database ID |
sql | string | Yes | SQL query to optimize |
Response:
{
"originalQuery": "SELECT * FROM users WHERE active = 1",
"optimizedQuery": "SELECT id, name, email FROM users WHERE active = 1 LIMIT 1000",
"improvements": [
"Removed unnecessary SELECT *",
"Added LIMIT clause to prevent large result sets"
],
"performanceGain": 35,
"confidence": "high"
}
14. profile_query
Profile a specific query's performance with detailed bottleneck analysis.
Input Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
dbId | string | Yes | Database ID |
sql | string | Yes | SQL query to profile |
params | array | No | Query parameters |
Response:
{
"queryId": "def456",
"sql": "SELECT u.name, COUNT(o.id) FROM users u JOIN orders o ON u.id = o.user_id GROUP BY u.id",
"executionTimeMs": 1250,
"rowCount": 5000,
"bottlenecks": [
{
"type": "join",
"severity": "high",
"description": "Nested loop join on large tables",
"estimatedCost": 150
}
],
"recommendations": [
{
"type": "add_index",
"description": "Add index on orders.user_id",
"impact": "high",
"effort": "low"
}
],
"overallScore": 65
}
Resources
The server exposes cached schemas as MCP resources:
- URI: schema://{dbId}
- MIME type: application/json
- Content: complete cached schema metadata
Schema Introspection
Automatic Discovery
The server automatically discovers:
- Tables and Views: All user tables and optionally views
- Columns: Name, data type, nullability, defaults, auto-increment
- Indexes: Including primary keys and unique constraints
- Foreign Keys: Explicit relationship metadata
- Relationships: Both explicit and inferred
Relationship Inference
When foreign keys are not defined, the server infers relationships using heuristics:
- Column names matching {table}_id or {table}Id
- Data type compatibility with the target primary key
- Confidence scoring for inferred relationships
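A simplified sketch of the naming heuristic (illustrative; the server's actual matching and confidence scoring are more involved):
// Illustrative: derive a candidate foreign-key target from column names like
// "user_id" or "userId". Pluralization handling is deliberately naive here.
function inferTargetTable(column: string, tables: string[]): string | undefined {
  const m = column.match(/^(.+?)(_id|Id)$/);
  if (!m) return undefined;
  const base = m[1].toLowerCase();
  // Accept an exact match or a simple plural ("user" -> "users").
  return tables.find((t) => {
    const name = t.toLowerCase();
    return name === base || name === `${base}s`;
  });
}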
Caching Strategy
- Memory + Disk: Dual-layer caching for performance
- TTL-based: Configurable time-to-live
- Version Tracking: Content-based versioning (hash)
- Concurrency Safe: Prevents duplicate introspection
- On-Demand Refresh: Manual or automatic refresh
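Content-based versioning could, for example, hash the serialized schema; a sketch, not necessarily the exact hashing the server uses:
import { createHash } from "node:crypto";

// Illustrative content-based version: hash the serialized schema so any
// structural change produces a new version string (e.g. "a1b2c3d4").
function schemaVersion(schema: unknown): string {
  return createHash("sha256")
    .update(JSON.stringify(schema))
    .digest("hex")
    .slice(0, 8);
}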
Query Tracking
The server maintains per-database query history:
- Timestamp and SQL text
- Execution time and row count
- Referenced tables (best-effort extraction)
- Error tracking
- Aggregate statistics
Use this data to:
- Monitor query performance
- Identify frequently accessed tables
- Detect query patterns
- Debug issues
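The tracked fields map naturally onto a record like the following (an illustrative shape, not the server's exact type):
// Illustrative shape for one tracked query, covering the fields listed above.
interface TrackedQuery {
  timestamp: string;      // ISO 8601
  sql: string;
  executionTimeMs: number;
  rowCount: number;
  tables: string[];       // best-effort extraction from the SQL text
  error?: string;         // present when the query failed
}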
Development
# Install dependencies
npm install
# Run in development mode
npm run dev
# Build
npm run build
# Run tests
npm test
# Run tests with coverage
npm run test:coverage
# Lint
npm run lint
# Format code
npm run format
# Type check
npm run typecheck
Project Structure
src/
├── adapters/ # Database adapters
│ ├── base.ts # Base adapter class
│ ├── postgres.ts # PostgreSQL adapter
│ ├── mysql.ts # MySQL adapter
│ ├── sqlite.ts # SQLite adapter
│ ├── mssql.ts # SQL Server adapter
│ ├── oracle.ts # Oracle adapter (stub)
│ └── index.ts # Adapter factory
├── cache.ts # Schema caching
├── config.ts # Configuration loader
├── database-manager.ts # Database orchestration
├── logger.ts # Logging setup
├── mcp-server.ts # MCP server implementation
├── query-tracker.ts # Query history tracking
├── types.ts # TypeScript types
├── utils.ts # Utility functions
└── index.ts # Entry point
Adding New Database Adapters
- Implement the DatabaseAdapter interface in src/adapters/
- Follow the pattern from existing adapters
- Add to the adapter factory in src/adapters/index.ts
- Update type definitions if needed
- Add tests
Example:
import { BaseAdapter } from './base.js';
import type { DatabaseSchema, QueryResult, ExplainResult } from '../types.js';

export class CustomAdapter extends BaseAdapter {
  async connect(): Promise<void> { /* open the connection or pool */ }
  async disconnect(): Promise<void> { /* release resources */ }
  async introspect(): Promise<DatabaseSchema> { /* discover tables, columns, keys */ }
  async query(): Promise<QueryResult> { /* execute SQL and shape the result */ }
  async explain(): Promise<ExplainResult> { /* return the native execution plan */ }
  async testConnection(): Promise<boolean> { /* cheap connectivity probe */ }
  async getVersion(): Promise<string> { /* e.g. "PostgreSQL 15.3" */ }
}
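Then register the new adapter in the factory so its type value resolves to the class. A sketch under the assumption that the factory switches on the configured type; the actual factory's shape may differ:
// src/adapters/index.ts (sketch): map config "type" values to adapter classes.
import { CustomAdapter } from './custom.js';

export function createAdapter(type: string /* , config */) {
  switch (type) {
    // ...existing cases: postgres, mysql, sqlite, mssql, oracle
    case 'custom':
      return new CustomAdapter(/* config */);
    default:
      throw new Error(`Unsupported database type: ${type}`);
  }
}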
Troubleshooting
Connection Issues
- Verify connection strings and credentials
- Check network connectivity and firewall rules
- Enable debug logging: "logging": { "level": "debug" }
- Use the health_check tool to test connectivity
Cache Issues
- Clear the cache with the clear_cache tool
- Check cache directory permissions
- Verify TTL settings
- Review cache status with the cache_status tool
Performance
- Adjust connection pool settings
- Use maxTables to limit introspection scope
- Set an appropriate cache TTL
- Enable read-only mode when possible
Oracle Setup
The Oracle adapter requires additional setup:
- Install Oracle Instant Client
- Set environment variables (LD_LIBRARY_PATH or PATH)
- Install the oracledb package
- Implement the stub methods in src/adapters/oracle.ts
Security Considerations
- Always use read-only mode in production unless write access is required
- Use environment variables for credentials, never hardcode
- Enable secret redaction in logs
- Restrict write operations with allowedWriteOperations
- Use connection string encryption where supported
- Audit configurations regularly
License
MIT
Contributing
Contributions welcome! Please:
- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Ensure all tests pass
- Submit a pull request
Support
For issues, questions, or feature requests, please open an issue on GitHub.