mcp-database-server

@adevguide/mcp-database-server

Production-grade Model Context Protocol (MCP) server for unified SQL database access. Connect multiple databases through a single MCP server with schema discovery, relationship mapping, caching, and safety controls.

Features

  • Multi-database support: PostgreSQL, MySQL/MariaDB, SQLite, SQL Server, Oracle
  • Automatic schema discovery: tables, columns, indexes, foreign keys, relationships
  • Persistent schema caching: TTL + versioning, manual refresh, cache stats
  • Relationship inference: foreign keys + heuristics
  • Query intelligence: tracking, statistics, timeouts
  • Join assistance: suggested join paths based on relationship graphs
  • Safety controls: read-only mode, allow/deny write operations, secret redaction
  • Query optimization: index recommendations, performance profiling, slow query detection
  • Performance monitoring: detailed execution analytics, bottleneck identification
  • Query rewriting: automated optimization suggestions with performance impact estimates

Why this exists

This project was originally vibe-coded to solve real issues I was facing when wiring LLM tools to multiple SQL databases (consistent connectivity, schema discovery, and safe query execution). It has since been hardened into a reusable MCP server with caching and security defaults.

Architecture

┌─────────────────────────────────────────────────────────┐
│                    MCP Client                            │
│            (Claude Desktop, IDEs, etc.)                  │
└────────────────┬────────────────────────────────────────┘
                 │ JSON-RPC over stdio
┌────────────────▼────────────────────────────────────────┐
│                MCP Database Server                       │
│  ┌──────────────────────────────────────────────────┐   │
│  │           Schema Cache (TTL + Versioning)        │   │
│  └──────────────────────────────────────────────────┘   │
│  ┌──────────────────────────────────────────────────┐   │
│  │  Query Tracker (History + Statistics)            │   │
│  └──────────────────────────────────────────────────┘   │
│  ┌──────────────────────────────────────────────────┐   │
│  │  Security Layer (Read-only, Operation Controls)  │   │
│  └──────────────────────────────────────────────────┘   │
└────┬─────────┬─────────┬──────────┬──────────┬─────────┘
     │         │         │          │          │
┌────▼───┐ ┌──▼────┐ ┌──▼─────┐ ┌──▼──────┐ ┌▼────────┐
│Postgres│ │ MySQL │ │ SQLite │ │ MSSQL   │ │ Oracle  │
└────────┘ └───────┘ └────────┘ └─────────┘ └─────────┘

Supported Databases

| Database | Driver | Status | Notes |
|---|---|---|---|
| PostgreSQL | pg | ✅ Full Support | Includes CockroachDB compatibility |
| MySQL/MariaDB | mysql2 | ✅ Full Support | Includes Amazon Aurora MySQL compatibility |
| SQLite | sql.js | ✅ Full Support | WASM-backed SQLite with file persistence |
| SQL Server | tedious | ✅ Full Support | Microsoft SQL Server / Azure SQL |
| Oracle | oracledb | ⚠️ Stub | Requires Oracle Instant Client |

Installation

Global install (recommended)

npm install -g @adevguide/mcp-database-server

Run:

mcp-database-server --config /absolute/path/to/.mcp-database-server.config

Run via npx (no global install)

npx -y @adevguide/mcp-database-server --config /absolute/path/to/.mcp-database-server.config

Install from source

git clone https://github.com/iPraBhu/mcp-database-server.git
cd mcp-database-server
npm install
npm run build
node dist/index.js --config ./.mcp-database-server.config

Configuration

Create a .mcp-database-server.config file in your project root:

Note: The config file is automatically discovered inside the current project tree. If you don't specify --config, the tool searches upward from the current directory until it reaches the detected project root (for example a directory containing package.json or .git). It does not continue past the project root. If you use credentialCommand, pass --config explicitly.

{
  "databases": [
    {
      "id": "postgres-main",
      "type": "postgres",
      "secretRef": "DB_URL_POSTGRES",
      "readOnly": true,
      "pool": {
        "min": 2,
        "max": 10,
        "idleTimeoutMillis": 30000
      },
      "introspection": {
        "includeViews": true,
        "excludeSchemas": ["pg_catalog"]
      }
    },
    {
      "id": "mariadb-reporting",
      "type": "mysql",
      "secretRef": "DB_URL_MARIADB",
      "readOnly": true,
      "pool": {
        "min": 1,
        "max": 5
      }
    },
    {
      "id": "sqlite-local",
      "type": "sqlite",
      "path": "./data/app.db"
    }
  ],
  "cache": {
    "directory": ".sql-mcp-cache",
    "ttlMinutes": 10
  },
  "security": {
    "allowWrite": false,
    "allowedWriteOperations": ["INSERT", "UPDATE"],
    "disableDangerousOperations": true,
    "redactSecrets": true
  },
  "logging": {
    "level": "info",
    "pretty": false
  }
}
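
When --config is omitted, the upward discovery described in the note above behaves roughly like this (a hypothetical sketch, not the actual implementation):

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

const CONFIG_NAME = ".mcp-database-server.config";
const ROOT_MARKERS = ["package.json", ".git"];

// Walk upward from startDir until the config file is found, stopping at
// the first directory that looks like a project root (or the filesystem root).
export function discoverConfig(startDir: string): string | null {
  let dir = path.resolve(startDir);
  for (;;) {
    const candidate = path.join(dir, CONFIG_NAME);
    if (fs.existsSync(candidate)) return candidate;
    const atRoot = ROOT_MARKERS.some((m) => fs.existsSync(path.join(dir, m)));
    const parent = path.dirname(dir);
    if (atRoot || parent === dir) return null; // do not search past the project root
    dir = parent;
  }
}
```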

Configuration Reference

Database Configuration

Each database in the databases array represents a connection to a SQL database.

Core Properties
| Property | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | ✅ Yes | - | Unique identifier for this database connection. Used in all MCP tool calls. Must be unique across all databases. |
| type | enum | ✅ Yes | - | Database system type. Valid values: postgres, mysql, sqlite, mssql, oracle |
| url | string | Conditional* | - | Explicit database connection string. Supports environment variable interpolation with ${DB_URL} but is not the recommended secret-handling path. |
| secretRef | string | Conditional* | - | Name of an environment variable that contains the full connection string. Resolved from the process environment or a .env file beside the config file. |
| credentialCommand | string | Conditional* | - | Shell command that prints the full connection string to stdout at startup. Useful for 1Password, Vault, AWS helpers, or custom secret tooling. Requires launching the server with an explicit --config path. |
| path | string | Conditional** | - | Filesystem path to the SQLite database file. Required only for type: sqlite. Can be relative or absolute. |
| readOnly | boolean | No | true | When true, blocks all write operations (INSERT, UPDATE, DELETE, etc.). Recommended for production safety. |
| eagerConnect | boolean | No | false | When true, connects to the database immediately at startup (fail-fast). When false, connects on first query (lazy loading). |

* Required for postgres, mysql, mssql, oracle
** Required for sqlite only

Connection String Formats:

PostgreSQL:  postgresql://username:password@host:5432/database
MySQL:       mysql://username:password@host:3306/database  
SQL Server:  Server=host,1433;Database=dbname;User Id=user;Password=pass
SQLite:      (use path property instead)
Oracle:      username/password@host:1521/servicename

Connection Pool Configuration

The pool object controls connection pooling behavior. Improves performance by reusing database connections.

| Property | Type | Required | Default | Description |
|---|---|---|---|---|
| min | number | No | 2 | Minimum number of connections to maintain in the pool. Kept alive even when idle. |
| max | number | No | 10 | Maximum number of concurrent connections. Do not exceed your database's connection limit. |
| idleTimeoutMillis | number | No | 30000 | Time (ms) to keep idle connections alive before closing. Example: 60000 = 1 minute. |
| connectionTimeoutMillis | number | No | 10000 | Time (ms) to wait when establishing a connection before timing out. Fails fast if the database is unreachable. |

Recommendations:

  • Development: min: 1, max: 5
  • Production (Low Traffic): min: 2, max: 10
  • Production (High Traffic): min: 5, max: 20

Introspection Configuration

The introspection object controls schema discovery behavior. Determines what database objects are analyzed.

| Property | Type | Required | Default | Description |
|---|---|---|---|---|
| includeViews | boolean | No | true | Include database views in schema discovery. Set to false if views cause performance issues. |
| includeRoutines | boolean | No | false | Include stored procedures and functions. (Not fully implemented; planned feature.) |
| maxTables | number | No | unlimited | Limit introspection to the first N tables. Useful for databases with 1000+ tables. May result in incomplete relationship discovery. |
| includeSchemas | string[] | No | all | Whitelist of schemas to introspect. Only applicable to PostgreSQL and SQL Server. Example: ["public", "app"] |
| excludeSchemas | string[] | No | none | Blacklist of schemas to skip. Common values: ["pg_catalog", "information_schema", "sys"] |

Schema vs Database:

  • PostgreSQL/SQL Server: Support multiple schemas per database. Use includeSchemas/excludeSchemas.
  • MySQL/MariaDB: Schema = database. Use database name in connection string.
  • SQLite: Single-file database, no schema concept.

Cache Configuration

Controls schema metadata caching to improve startup performance and reduce database load.

| Property | Type | Required | Default | Description |
|---|---|---|---|---|
| directory | string | No | .sql-mcp-cache | Directory path where cached schema files are stored. One JSON file per database. |
| ttlMinutes | number | No | 10 | Time-to-live in minutes. How long cached schema is considered valid before automatic refresh. |

Cache Behavior:

  • On Startup: Loads schema from cache if available and not expired
  • After TTL Expiry: Next query triggers automatic re-introspection
  • Manual Refresh: Use clear_cache tool or introspect_schema with forceRefresh: true
  • Cache Files: Stored as {database-id}.json (e.g., postgres-main.json)
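
The validity check behind this behavior is a simple age-versus-TTL comparison; a sketch with illustrative names (not the server's actual code):

```typescript
interface CacheEntry {
  version: string;        // content hash of the introspected schema
  introspectedAt: number; // epoch milliseconds
}

// A cached schema is served only while its age is below the TTL;
// once it expires, the next query triggers re-introspection.
export function isCacheValid(
  entry: CacheEntry | undefined,
  ttlMinutes: number,
  now: number = Date.now()
): boolean {
  if (!entry) return false;
  return now - entry.introspectedAt < ttlMinutes * 60_000;
}
```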

Recommended TTL Values:

  • Development: 5 minutes (schema changes frequently)
  • Staging: 30-60 minutes
  • Production (Static): 1440 minutes (24 hours)
  • Production (Active): 60-240 minutes (1-4 hours)

Security Configuration

Comprehensive security controls to protect your databases from unauthorized or dangerous operations.

| Property | Type | Required | Default | Description |
|---|---|---|---|---|
| allowWrite | boolean | No | false | Master switch for write operations. When false, all writes are blocked across all databases. |
| allowedWriteOperations | string[] | No | all | Whitelist of allowed SQL operations when allowWrite: true. Valid values: INSERT, UPDATE, DELETE, CREATE, ALTER, DROP, TRUNCATE, REPLACE, MERGE |
| disableDangerousOperations | boolean | No | true | Extra safety layer. When true, blocks DELETE, TRUNCATE, and DROP operations even if writes are allowed. Prevents accidental data loss. |
| redactSecrets | boolean | No | true | Redact connection strings, passwords, and similar credentials in logs and returned error messages. |

Security Layers (Evaluated in Order):

  1. Database-level readOnly → Blocks all writes for specific database
  2. Global allowWrite → Master switch for all databases
  3. disableDangerousOperations → Blocks DELETE/TRUNCATE/DROP specifically
  4. allowedWriteOperations → Whitelist of permitted operations
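
The four layers above can be sketched as a single guard function (illustrative; not the server's actual code):

```typescript
type WriteOp =
  | "INSERT" | "UPDATE" | "DELETE" | "CREATE" | "ALTER"
  | "DROP" | "TRUNCATE" | "REPLACE" | "MERGE";

const DANGEROUS: WriteOp[] = ["DELETE", "TRUNCATE", "DROP"];

interface SecurityConfig {
  allowWrite: boolean;
  disableDangerousOperations: boolean;
  allowedWriteOperations?: WriteOp[];
}

// Layers are evaluated in the documented order; the first failing layer rejects.
export function isWriteAllowed(
  op: WriteOp,
  dbReadOnly: boolean,
  cfg: SecurityConfig
): boolean {
  if (dbReadOnly) return false;      // 1. per-database readOnly
  if (!cfg.allowWrite) return false; // 2. global master switch
  if (cfg.disableDangerousOperations && DANGEROUS.includes(op)) return false; // 3.
  if (cfg.allowedWriteOperations && !cfg.allowedWriteOperations.includes(op)) {
    return false;                    // 4. operation whitelist
  }
  return true;
}
```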

Example Configurations:

// Read-only access (default - safest)
{
  "allowWrite": false
}

// Allow INSERT and UPDATE only (no deletes)
{
  "allowWrite": true,
  "allowedWriteOperations": ["INSERT", "UPDATE"],
  "disableDangerousOperations": true
}

// Full write access (development only - dangerous!)
{
  "allowWrite": true,
  "disableDangerousOperations": false
}

Logging Configuration

Controls log output verbosity and formatting.

| Property | Type | Required | Default | Description |
|---|---|---|---|---|
| level | enum | No | info | Log level. Valid values: trace, debug, info, warn, error. Each level includes the output of all less verbose levels. |
| pretty | boolean | No | false | When true, formats logs as human-readable text. When false, outputs structured JSON (better for production log aggregation). |

Log Levels:

  • trace: Everything (extremely verbose - use for debugging only)
  • debug: Detailed diagnostic information
  • info: General informational messages (recommended for production)
  • warn: Warning messages that don't prevent operation
  • error: Error messages only

Recommendations:

  • Development: level: "debug", pretty: true
  • Production: level: "info", pretty: false
  • Troubleshooting: level: "trace", pretty: true

Complete Configuration Example

{
  "databases": [
    {
      "id": "postgres-production",
      "type": "postgres",
      "url": "${DATABASE_URL}",
      "readOnly": true,
      "pool": {
        "min": 5,
        "max": 20,
        "idleTimeoutMillis": 60000,
        "connectionTimeoutMillis": 5000
      },
      "introspection": {
        "includeViews": true,
        "includeRoutines": false,
        "excludeSchemas": ["pg_catalog", "information_schema"]
      },
      "eagerConnect": true
    },
    {
      "id": "mysql-analytics",
      "type": "mysql",
      "url": "${MYSQL_URL}",
      "readOnly": true,
      "pool": {
        "min": 2,
        "max": 10
      },
      "introspection": {
        "includeViews": true,
        "maxTables": 100
      }
    },
    {
      "id": "sqlite-local",
      "type": "sqlite",
      "path": "./data/app.db",
      "readOnly": true
    }
  ],
  "cache": {
    "directory": ".sql-mcp-cache",
    "ttlMinutes": 60
  },
  "security": {
    "allowWrite": false,
    "allowedWriteOperations": ["INSERT", "UPDATE"],
    "disableDangerousOperations": true,
    "redactSecrets": true
  },
  "logging": {
    "level": "info",
    "pretty": false
  }
}

Secret Resolution

Recommended approach: secretRef

Keep secrets out of the MCP client config and out of the server config values themselves.

Example Configuration:

{
  "databases": [
    {
      "id": "production-db",
      "type": "postgres",
      "secretRef": "DATABASE_URL"
    }
  ]
}

The server resolves secretRef from the process environment first, and then from a .env file next to .mcp-database-server.config.

Environment File (.env):

DATABASE_URL=postgresql://user:password@localhost:5432/dbname
DB_URL_MYSQL=mysql://user:password@localhost:3306/dbname
DB_URL_MARIADB=mysql://report_user:pass@mariadb-host:3306/reporting
DB_URL_MSSQL=Server=host,1433;Database=db;User Id=sa;Password=pass
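
The resolution order (process environment first, then the .env file beside the config) can be sketched as follows (hypothetical helper; the server's actual parser may differ):

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Resolve a secretRef: the process environment wins; otherwise fall back
// to a .env file next to the config file. Minimal parser, no quoting rules.
export function resolveSecretRef(
  name: string,
  configPath: string
): string | undefined {
  if (process.env[name] !== undefined) return process.env[name];
  const envPath = path.join(path.dirname(configPath), ".env");
  if (!fs.existsSync(envPath)) return undefined;
  for (const line of fs.readFileSync(envPath, "utf8").split(/\r?\n/)) {
    const m = line.match(/^\s*([\w.]+)\s*=\s*(.*)$/);
    if (m && m[1] === name) return m[2];
  }
  return undefined;
}
```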

Alternative: credentialCommand

{
  "databases": [
    {
      "id": "analytics-db",
      "type": "mysql",
      "credentialCommand": "op read op://analytics/mysql/url"
    }
  ]
}

The command must print only the connection string to stdout. For safety, credentialCommand is only allowed when the server is launched with an explicit --config path. Auto-discovered configs cannot execute credential commands.

Still Supported: direct env interpolation

You can still write "url": "${DATABASE_URL}", but secretRef is the cleaner option because it makes the secret source explicit.

Best Practices:

  • ✅ Store .env file outside version control (add to .gitignore)
  • ✅ Use different .env files for each environment (dev, staging, prod)
  • ✅ Never commit credentials to git repositories
  • ✅ Use secret management services (AWS Secrets Manager, HashiCorp Vault) in production

Connection String Reference

| Database | Format | Example |
|---|---|---|
| PostgreSQL | postgresql://user:pass@host:port/db | postgresql://admin:secret@localhost:5432/myapp |
| MySQL | mysql://user:pass@host:port/db | mysql://root:password@localhost:3306/myapp |
| MariaDB | mysql://user:pass@host:port/db | mysql://report_user:pass@mariadb-host:3306/reporting |
| SQL Server | Server=host,port;Database=db;User Id=user;Password=pass | Server=localhost,1433;Database=myapp;User Id=sa;Password=secret |
| SQLite | Use path property | "path": "./data/app.db" or "path": "/var/db/app.sqlite" |

Additional Parameters:

PostgreSQL:

postgresql://user:pass@host:5432/db?sslmode=require&connect_timeout=10

MySQL:

mysql://user:pass@host:3306/db?charset=utf8mb4&timezone=Z

MariaDB:

mysql://user:pass@host:3306/db?charset=utf8mb4

SQL Server:

Server=host;Database=db;User Id=user;Password=pass;Encrypt=true;TrustServerCertificate=false

MCP Client Integration

Configuration File Locations

| MCP Client | Configuration File Path |
|---|---|
| Claude Desktop (macOS) | ~/Library/Application Support/Claude/claude_desktop_config.json |
| Claude Desktop (Windows) | %APPDATA%\Claude\claude_desktop_config.json |
| Cline (VS Code) | VS Code settings → MCP Servers |
| Other Clients | Refer to client-specific documentation |

Setup Methods

Method 1: Global npm Installation

Configuration:

{
  "mcpServers": {
    "database": {
      "command": "mcp-database-server",
      "args": ["--config", "/absolute/path/to/.mcp-database-server.config"]
    }
  }
}

Method 2: Source Installation

Configuration:

{
  "mcpServers": {
    "database": {
      "command": "node",
      "args": [
        "/absolute/path/to/mcp-database-server/dist/index.js",
        "--config",
        "/absolute/path/to/.mcp-database-server.config"
      ]
    }
  }
}

Configuration Properties

| Property | Description | Example |
|---|---|---|
| command | Executable to run. Use mcp-database-server for the npm install, node for the source install. | "mcp-database-server" |
| args | Array of command-line arguments. The first arg is usually --config followed by the config file path. | ["--config", "/path/to/config"] |
| env | Optional environment variables passed to the server. Prefer secretRef with a local .env file or external secret tooling for DB credentials. | {"APP_ENV": "production"} |

Finding Absolute Paths:

# macOS/Linux
cd /path/to/mcp-database-server
pwd  # prints: /Users/username/projects/mcp-database-server

# Windows (PowerShell)
cd C:\path\to\mcp-database-server
$PWD.Path  # prints: C:\Users\username\projects\mcp-database-server

Available MCP Tools

This server provides 15 tools for comprehensive database interaction and optimization.

Tool Reference

| Tool | Purpose | Write Access | Cached Data |
|---|---|---|---|
| list_databases | List all configured databases with status | No | Uses cache |
| introspect_schema | Discover and cache database schema | No | Writes cache |
| get_schema | Retrieve cached schema metadata | No | Reads cache |
| run_query | Execute SQL queries with safety controls | Conditional* | Updates stats |
| export_query | Export large read-only query results to a local file | No | No cache |
| explain_query | Analyze query execution plans | No | No cache |
| suggest_joins | Get intelligent join path recommendations | No | Uses cache |
| clear_cache | Clear schema cache and statistics | No | Clears cache |
| cache_status | View cache health and statistics | No | Reads cache |
| health_check | Test database connectivity | No | No cache |
| analyze_performance | Get detailed performance analytics | No | Uses stats |
| suggest_indexes | Analyze queries and recommend indexes | No | Uses stats |
| detect_slow_queries | Identify and alert on slow queries | No | Uses stats |
| rewrite_query | Suggest optimized query versions | No | Uses cache |
| profile_query | Profile query performance with bottlenecks | No | No cache |

* Requires allowWrite: true and respects security settings


1. list_databases

Lists all configured databases with their connection status and cache information.

Input Parameters:

None required.

Response:

[
  {
    "id": "postgres-main",
    "type": "postgres",
    "connected": true,
    "cached": true,
    "cacheAge": 45000,
    "version": "abc123"
  }
]

Response Fields:

| Field | Type | Description |
|---|---|---|
| id | string | Database identifier from configuration |
| type | string | Database type (postgres, mysql, sqlite, mssql, oracle) |
| connected | boolean | Whether the database connection is active |
| cached | boolean | Whether the schema is currently cached |
| cacheAge | number | Age of cached schema in milliseconds (if cached) |
| version | string | Cache version hash (if cached) |

2. introspect_schema

Discovers and caches complete database schema including tables, columns, indexes, foreign keys, and relationships.

Input Parameters:

| Parameter | Type | Required | Description |
|---|---|---|---|
| dbId | string | Yes | Database identifier to introspect |
| forceRefresh | boolean | No | Force re-introspection even if the cache is valid (default: false) |
| schemaFilter | object | No | Filter which objects to introspect |
| schemaFilter.includeSchemas | string[] | No | Only introspect these schemas (PostgreSQL/SQL Server) |
| schemaFilter.excludeSchemas | string[] | No | Skip these schemas during introspection |
| schemaFilter.includeViews | boolean | No | Include database views (default: true) |
| schemaFilter.maxTables | number | No | Limit to the first N tables |

Example Request:

{
  "dbId": "postgres-main",
  "forceRefresh": false,
  "schemaFilter": {
    "includeSchemas": ["public"],
    "excludeSchemas": ["temp"],
    "includeViews": true,
    "maxTables": 100
  }
}

Response:

{
  "dbId": "postgres-main",
  "version": "a1b2c3d4",
  "introspectedAt": "2026-01-26T10:00:00.000Z",
  "schemas": [
    {
      "name": "public",
      "tableCount": 15,
      "viewCount": 3
    }
  ],
  "totalTables": 15,
  "totalRelationships": 12
}

3. get_schema

Retrieves detailed schema metadata from cache without querying the database.

Input Parameters:

| Parameter | Type | Required | Description |
|---|---|---|---|
| dbId | string | Yes | Database identifier |
| schema | string | No | Filter to a specific schema name |
| table | string | No | Filter to a specific table name |

Example Request:

{
  "dbId": "postgres-main",
  "schema": "public",
  "table": "users"
}

Response: Complete schema metadata including tables, columns, data types, indexes, foreign keys, and inferred relationships.


4. run_query

Executes SQL queries with automatic schema caching, relationship annotation, and comprehensive security controls.

Input Parameters:

| Parameter | Type | Required | Description |
|---|---|---|---|
| dbId | string | Yes | Database identifier to query |
| sql | string | Yes | SQL query to execute |
| params | array | No | Parameterized query values (prevents SQL injection) |
| limit | number | No | Maximum number of rows to return |
| offset | number | No | Row offset for paginated reads. Requires limit. |
| maxBytes | number | No | Approximate max serialized bytes for returned rows |
| includeMetadata | boolean | No | Include relationship and query statistics metadata in the response. Default: true. |
| trackQuery | boolean | No | Track this query in history and performance analytics. Default: true. |
| timeoutMs | number | No | Query timeout in milliseconds |

Example Request:

{
  "dbId": "postgres-main",
  "sql": "SELECT * FROM users WHERE active = $1 ORDER BY id",
  "params": [true],
  "limit": 10,
  "offset": 0,
  "maxBytes": 32768,
  "includeMetadata": false,
  "trackQuery": false,
  "timeoutMs": 5000
}

Response:

{
  "rows": [
    {"id": 1, "name": "Alice", "email": "alice@example.com", "active": true},
    {"id": 2, "name": "Bob", "email": "bob@example.com", "active": true}
  ],
  "columns": ["id", "name", "email", "active"],
  "rowCount": 2,
  "executionTimeMs": 15,
  "metadata": {
    "relationships": [...],
    "queryStats": {
      "totalQueries": 10,
      "avgExecutionTime": 20,
      "errorCount": 0
    },
    "pagination": {
      "limit": 10,
      "offset": 0,
      "hasMore": true,
      "nextOffset": 10
    },
    "responseSize": {
      "maxBytes": 32768,
      "rowsBytes": 1842,
      "rowsTrimmed": false,
      "omittedRowCount": 0
    }
  }
}

For the fastest MariaDB/MySQL read path, set "includeMetadata": false and "trackQuery": false when you only need result rows and do not need relationship annotations, query history, or performance analytics for that request.
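
To page through a large result set, feed metadata.pagination.nextOffset back into the next call; a client-side sketch (callTool stands in for your MCP client's tool-invocation API, which varies by SDK):

```typescript
// Hypothetical client helper: callTool is whatever your MCP client exposes
// for invoking a server tool; the response shape mirrors the run_query
// metadata documented above.
type CallTool = (name: string, args: object) => Promise<{
  rows: unknown[];
  metadata?: { pagination?: { hasMore: boolean; nextOffset: number } };
}>;

// Collect every row by following server-provided pagination metadata.
export async function fetchAllRows(
  callTool: CallTool,
  dbId: string,
  sql: string,
  limit = 500
): Promise<unknown[]> {
  const all: unknown[] = [];
  let offset = 0;
  for (;;) {
    const res = await callTool("run_query", { dbId, sql, limit, offset });
    all.push(...res.rows);
    const page = res.metadata?.pagination;
    if (!page?.hasMore) break; // no more pages
    offset = page.nextOffset;  // continue from the server-provided offset
  }
  return all;
}
```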

Security Controls:

  • ✅ Write operations blocked by default (allowWrite: false)
  • ✅ Dangerous operations (DELETE, TRUNCATE, DROP) disabled by default
  • ✅ Specific operations can be whitelisted via allowedWriteOperations
  • ✅ Per-database readOnly mode

5. explain_query

Retrieves database query execution plan without executing the query.

Input Parameters:

| Parameter | Type | Required | Description |
|---|---|---|---|
| dbId | string | Yes | Database identifier |
| sql | string | Yes | SQL query to analyze |
| params | array | No | Query parameters (for parameterized queries) |

Example Request:

{
  "dbId": "postgres-main",
  "sql": "SELECT * FROM users JOIN orders ON users.id = orders.user_id WHERE users.active = $1",
  "params": [true]
}

Response: Database-native execution plan (format varies by database type).


5a. export_query

Exports large read-only query results to a local file under .sql-mcp-cache/exports.

Execution Strategy:

  • MySQL/MariaDB uses adapter-level row streaming to avoid loading the full result set into memory.
  • PostgreSQL and SQLite use paged export by rewriting top-level LIMIT/OFFSET windows.
  • SQL Server has no adapter-level streaming path yet; exports currently fail unless the paging rewrite can be applied.
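
For the paged strategy, the server effectively runs the statement repeatedly with a sliding LIMIT/OFFSET window; a simplified sketch (the real rewrite also handles statements that already carry top-level LIMIT/OFFSET):

```typescript
// Produce the paged variants of a read-only statement for engines without
// streaming support. Assumes the input has no top-level LIMIT/OFFSET of
// its own; the caller stops once a page returns fewer than pageSize rows.
export function* pageStatements(
  sql: string,
  pageSize: number
): Generator<string> {
  const base = sql.trim().replace(/;+$/, ""); // strip trailing semicolons
  for (let offset = 0; ; offset += pageSize) {
    yield `${base} LIMIT ${pageSize} OFFSET ${offset}`;
  }
}
```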

Input Parameters:

| Parameter | Type | Required | Description |
|---|---|---|---|
| dbId | string | Yes | Database identifier |
| sql | string | Yes | Read-only SQL query to export |
| params | array | No | Query parameters |
| format | string | No | Output format: jsonl or csv (default: jsonl) |
| pageSize | number | No | Page size for non-streaming adapters (default: 1000) |
| fileName | string | No | Optional output file name written inside the export directory |
| timeoutMs | number | No | Query timeout in milliseconds |

Example Request:

{
  "dbId": "mariadb-reporting",
  "sql": "SELECT id, email, created_at FROM users ORDER BY id",
  "format": "jsonl",
  "fileName": "users-export.jsonl",
  "timeoutMs": 10000
}

Response:

{
  "dbId": "mariadb-reporting",
  "outputPath": "/absolute/path/to/.sql-mcp-cache/exports/users-export.jsonl",
  "format": "jsonl",
  "strategy": "stream",
  "rowsExported": 250000,
  "columns": ["id", "email", "created_at"],
  "fileSizeBytes": 18342011,
  "executionTimeMs": 8421
}

6. suggest_joins

Analyzes relationship graph to recommend optimal join paths between multiple tables.

Input Parameters:

| Parameter | Type | Required | Description |
|---|---|---|---|
| dbId | string | Yes | Database identifier |
| tables | string[] | Yes | Array of table names to join (2-10 tables) |

Example Request:

{
  "dbId": "postgres-main",
  "tables": ["users", "orders", "products"]
}

Response:

[
  {
    "tables": ["users", "orders", "products"],
    "joins": [
      {
        "fromTable": "users",
        "toTable": "orders",
        "relationship": {
          "type": "one-to-many",
          "confidence": 1.0
        },
        "joinCondition": "users.id = orders.user_id"
      },
      {
        "fromTable": "orders",
        "toTable": "products",
        "relationship": {
          "type": "many-to-one",
          "confidence": 1.0
        },
        "joinCondition": "orders.product_id = products.id"
      }
    ],
    "sql": "FROM users JOIN orders ON users.id = orders.user_id JOIN products ON orders.product_id = products.id"
  }
]
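
Under the hood, a join suggestion is a shortest-path search over the relationship graph; a minimal breadth-first sketch (types are illustrative, not the server's internal ones):

```typescript
interface Edge { from: string; to: string; condition: string }

// Breadth-first search for the shortest chain of known relationships
// connecting two tables; edges are treated as bidirectional.
export function findJoinPath(
  edges: Edge[],
  start: string,
  goal: string
): string[] | null {
  const queue: string[][] = [[start]];
  const seen = new Set([start]);
  while (queue.length) {
    const path = queue.shift()!;
    const table = path[path.length - 1];
    if (table === goal) return path;
    for (const e of edges) {
      const next = e.from === table ? e.to : e.to === table ? e.from : null;
      if (next && !seen.has(next)) {
        seen.add(next);
        queue.push([...path, next]);
      }
    }
  }
  return null; // tables are not connected by any known relationship
}
```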

7. clear_cache

Clears schema cache and query statistics for one or all databases.

Input Parameters:

| Parameter | Type | Required | Description |
|---|---|---|---|
| dbId | string | No | Database to clear (omit to clear all) |

Example Request:

{
  "dbId": "postgres-main"
}

Response: Confirmation message.


8. cache_status

Retrieves detailed cache statistics and health information.

Input Parameters:

None required.

Response:

{
  "directory": ".sql-mcp-cache",
  "ttlMinutes": 10,
  "databases": [
    {
      "dbId": "postgres-main",
      "cached": true,
      "version": "abc123",
      "age": 120000,
      "expired": false,
      "tableCount": 15,
      "sizeBytes": 45678
    }
  ]
}

9. health_check

Tests database connectivity and returns status information.

Input Parameters:

| Parameter | Type | Required | Description |
|---|---|---|---|
| dbId | string | No | Database to check (omit to check all) |

Response:

{
  "databases": [
    {
      "dbId": "postgres-main",
      "healthy": true,
      "connected": true,
      "version": "PostgreSQL 15.3",
      "responseTimeMs": 12
    }
  ]
}

10. analyze_performance

Get comprehensive performance analytics across all queries for a database.

Input Parameters:

| Parameter | Type | Required | Description |
|---|---|---|---|
| dbId | string | Yes | Database to analyze |

Response:

{
  "totalQueries": 1250,
  "slowQueries": 23,
  "avgExecutionTime": 45.67,
  "p95ExecutionTime": 234.5,
  "errorRate": 1.2,
  "mostFrequentTables": [
    { "table": "users", "count": 456 },
    { "table": "orders", "count": 234 }
  ],
  "performanceTrend": "improving"
}

11. suggest_indexes

Analyze query patterns and recommend optimal database indexes.

Input Parameters:

| Parameter | Type | Required | Description |
|---|---|---|---|
| dbId | string | Yes | Database to analyze |

Response:

[
  {
    "table": "orders",
    "columns": ["customer_id", "order_date"],
    "type": "composite",
    "reason": "Frequently used in WHERE and JOIN conditions",
    "impact": "high"
  },
  {
    "table": "products",
    "columns": ["category_id"],
    "type": "single",
    "reason": "Column category_id is frequently queried",
    "impact": "medium"
  }
]

12. detect_slow_queries

Identify queries that exceed performance thresholds and provide alerts.

Input Parameters:

| Parameter | Type | Required | Description |
|---|---|---|---|
| dbId | string | Yes | Database to analyze |

Response:

[
  {
    "dbId": "postgres-main",
    "queryId": "a1b2c3",
    "sql": "SELECT * FROM large_table WHERE slow_column = ?",
    "executionTimeMs": 2500,
    "thresholdMs": 1000,
    "timestamp": "2024-01-27T10:30:00Z",
    "frequency": 5,
    "recommendations": [
      {
        "type": "add_index",
        "description": "Add index on slow_column for better performance",
        "impact": "high",
        "effort": "medium"
      }
    ]
  }
]

13. rewrite_query

Suggest optimized versions of SQL queries with performance improvements.

Input Parameters:

| Parameter | Type | Required | Description |
|---|---|---|---|
| dbId | string | Yes | Database ID |
| sql | string | Yes | SQL query to optimize |

Response:

{
  "originalQuery": "SELECT * FROM users WHERE active = 1",
  "optimizedQuery": "SELECT id, name, email FROM users WHERE active = 1 LIMIT 1000",
  "improvements": [
    "Removed unnecessary SELECT *",
    "Added LIMIT clause to prevent large result sets"
  ],
  "performanceGain": 35,
  "confidence": "high"
}

14. profile_query

Profile a specific query's performance with detailed bottleneck analysis.

Input Parameters:

| Parameter | Type | Required | Description |
|---|---|---|---|
| dbId | string | Yes | Database ID |
| sql | string | Yes | SQL query to profile |
| params | array | No | Query parameters |

Response:

{
  "queryId": "def456",
  "sql": "SELECT u.name, COUNT(o.id) FROM users u JOIN orders o ON u.id = o.user_id GROUP BY u.id",
  "executionTimeMs": 1250,
  "rowCount": 5000,
  "bottlenecks": [
    {
      "type": "join",
      "severity": "high",
      "description": "Nested loop join on large tables",
      "estimatedCost": 150
    }
  ],
  "recommendations": [
    {
      "type": "add_index",
      "description": "Add index on orders.user_id",
      "impact": "high",
      "effort": "low"
    }
  ],
  "overallScore": 65
}

Resources

The server exposes cached schemas as MCP resources:

  • URI: schema://{dbId}
  • MIME Type: application/json
  • Content: Complete cached schema metadata

Schema Introspection

Automatic Discovery

The server automatically discovers:

  1. Tables and Views: All user tables and optionally views
  2. Columns: Name, data type, nullability, defaults, auto-increment
  3. Indexes: Including primary keys and unique constraints
  4. Foreign Keys: Explicit relationship metadata
  5. Relationships: Both explicit and inferred

Relationship Inference

When foreign keys are not defined, the server infers relationships using heuristics:

  • Column names matching {table}_id or {table}Id
  • Data type compatibility with target primary key
  • Confidence scoring for inferred relationships
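
The naming heuristic can be approximated as follows (a simplified sketch; the real inference also scores data-type compatibility against the target primary key):

```typescript
// Infer the likely referenced table from a column name such as
// "user_id" or "userId"; returns null when no naming pattern matches.
export function inferTargetTable(column: string): string | null {
  const snake = column.match(/^(.+)_id$/i); // matches user_id, account_ID, ...
  if (snake) return snake[1].toLowerCase();
  const camel = column.match(/^(.+)Id$/);   // matches userId, productId, ...
  if (camel) return camel[1].toLowerCase();
  return null;
}
```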

Caching Strategy

  • Memory + Disk: Dual-layer caching for performance
  • TTL-based: Configurable time-to-live
  • Version Tracking: Content-based versioning (hash)
  • Concurrency Safe: Prevents duplicate introspection
  • On-Demand Refresh: Manual or automatic refresh
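
Content-based versioning boils down to hashing the serialized schema, so an unchanged schema keeps its version across refreshes. A sketch (assumes naive JSON serialization; a real implementation would canonicalize key order first):

```typescript
import { createHash } from "node:crypto";

// Two introspection runs that produce identical schema metadata yield the
// same version hash, so unchanged schemas never invalidate client caches.
// Note: plain JSON.stringify is key-order sensitive; canonicalize in practice.
export function schemaVersion(schema: object): string {
  const json = JSON.stringify(schema);
  return createHash("sha256").update(json).digest("hex").slice(0, 8);
}
```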

Query Tracking

The server maintains per-database query history:

  • Timestamp and SQL text
  • Execution time and row count
  • Referenced tables (best-effort extraction)
  • Error tracking
  • Aggregate statistics

Use this data to:

  • Monitor query performance
  • Identify frequently accessed tables
  • Detect query patterns
  • Debug issues
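
Aggregating that history into the statistics surfaced by analyze_performance can be sketched as follows (field names mirror the example response above; the exact computation shown is an assumption, not the server's formula):

```typescript
interface QueryRecord { executionTimeMs: number; error?: boolean }

// Roll per-query history up into aggregate statistics.
export function aggregate(history: QueryRecord[]) {
  const times = history.map((h) => h.executionTimeMs).sort((a, b) => a - b);
  const errs = history.filter((h) => h.error).length;
  const avg = times.reduce((s, t) => s + t, 0) / (times.length || 1);
  // Nearest-rank p95 over the sorted execution times.
  const p95 = times[Math.min(times.length - 1, Math.floor(times.length * 0.95))] ?? 0;
  return {
    totalQueries: history.length,
    avgExecutionTime: avg,
    p95ExecutionTime: p95,
    errorRate: history.length ? (errs / history.length) * 100 : 0,
  };
}
```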

Development

# Install dependencies
npm install

# Run in development mode
npm run dev

# Build
npm run build

# Run tests
npm test

# Run tests with coverage
npm run test:coverage

# Lint
npm run lint

# Format code
npm run format

# Type check
npm run typecheck

Project Structure

src/
├── adapters/          # Database adapters
│   ├── base.ts        # Base adapter class
│   ├── postgres.ts    # PostgreSQL adapter
│   ├── mysql.ts       # MySQL adapter
│   ├── sqlite.ts      # SQLite adapter
│   ├── mssql.ts       # SQL Server adapter
│   ├── oracle.ts      # Oracle adapter (stub)
│   └── index.ts       # Adapter factory
├── cache.ts           # Schema caching
├── config.ts          # Configuration loader
├── database-manager.ts # Database orchestration
├── logger.ts          # Logging setup
├── mcp-server.ts      # MCP server implementation
├── query-tracker.ts   # Query history tracking
├── types.ts           # TypeScript types
├── utils.ts           # Utility functions
└── index.ts           # Entry point

Adding New Database Adapters

  1. Implement the DatabaseAdapter interface in src/adapters/
  2. Follow the pattern from existing adapters
  3. Add to adapter factory in src/adapters/index.ts
  4. Update type definitions if needed
  5. Add tests

Example:

import { BaseAdapter } from './base.js';
import type { DatabaseSchema, ExplainResult, QueryResult } from '../types.js';

export class CustomAdapter extends BaseAdapter {
  async connect(): Promise<void> { /* ... */ }
  async disconnect(): Promise<void> { /* ... */ }
  async introspect(): Promise<DatabaseSchema> { /* ... */ }
  async query(): Promise<QueryResult> { /* ... */ }
  async explain(): Promise<ExplainResult> { /* ... */ }
  async testConnection(): Promise<boolean> { /* ... */ }
  async getVersion(): Promise<string> { /* ... */ }
}

Troubleshooting

Connection Issues

  • Verify connection strings and credentials
  • Check network connectivity and firewall rules
  • Enable debug logging: "logging": { "level": "debug" }
  • Use health_check tool to test connectivity

Cache Issues

  • Clear cache: Use clear_cache tool
  • Check cache directory permissions
  • Verify TTL settings
  • Review cache status with cache_status tool

Performance

  • Adjust connection pool settings
  • Use maxTables to limit introspection scope
  • Set appropriate cache TTL
  • Enable read-only mode when possible

Oracle Setup

The Oracle adapter requires additional setup:

  1. Install Oracle Instant Client
  2. Set environment variables (LD_LIBRARY_PATH or PATH)
  3. Install oracledb package
  4. Implement stub methods in src/adapters/oracle.ts

Security Considerations

  • Always use read-only mode in production unless write access is required
  • Use environment variables for credentials, never hardcode
  • Enable secret redaction in logs
  • Restrict write operations with allowedWriteOperations
  • Use connection string encryption where supported
  • Regular security audits of configurations

License

MIT

Contributing

Contributions welcome! Please:

  1. Fork the repository
  2. Create a feature branch
  3. Add tests for new functionality
  4. Ensure all tests pass
  5. Submit a pull request

Support

For issues, questions, or feature requests, please open an issue on GitHub.
