Universal database MCP server supporting multiple database types including PostgreSQL, Redshift, CockroachDB, MySQL, RDS MySQL, Microsoft SQL Server, BigQuery, Oracle DB, and SQLite
A server that helps people access and query data in databases using the Legion Query Runner, integrated with the Model Context Protocol (MCP) Python SDK.
This tool is provided by Legion AI. To use the full-fledged, fully powered AI data analytics tool, please visit our site. Email us if there is a database you would like us to support.
Database MCP stands out from other database access solutions: whether you're building AI agents that need database access or simply want a unified interface to multiple databases, it provides a streamlined solution that dramatically reduces development time and complexity.
Database | DB_TYPE code |
---|---|
PostgreSQL | pg |
Redshift | redshift |
CockroachDB | cockroach |
MySQL | mysql |
RDS MySQL | rds_mysql |
Microsoft SQL Server | mssql |
BigQuery | bigquery |
Oracle DB | oracle |
SQLite | sqlite |
We use the Legion Query Runner library as the connector layer. You can find more information in their API documentation.
The Model Context Protocol (MCP) is a specification for maintaining context in AI applications. This server uses the MCP Python SDK to expose database operations as MCP tools, resources, and prompts.
For single database configuration:
For multi-database configuration:
The configuration format varies by database type. See the API documentation for database-specific configuration details.
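For instance, in the examples in this document the `pg` runner takes a `dbname` key while the `mysql` runner takes `database` for the same field. A small sketch (values are placeholders) that builds both configurations and serializes them for the `DB_CONFIG` environment variable:

```python
import json

# Connection settings per database type. The key difference shown here
# ("dbname" vs "database") is taken from the examples in this document;
# consult the API documentation for the full per-database key list.
pg_config = {
    "host": "localhost",
    "port": 5432,
    "user": "user",
    "password": "pw",
    "dbname": "dbname",
}
mysql_config = {
    "host": "localhost",
    "port": 3306,
    "user": "root",
    "password": "pass",
    "database": "mysql",
}

# DB_CONFIG must be a JSON string; json.dumps handles quoting and escaping.
print(json.dumps(pg_config))
print(json.dumps(mysql_config))
```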
When using `uv`, no specific installation is needed; we use `uvx` to run database-mcp directly.
UV Configuration Example (Single Database):
Replace `DB_TYPE` and `DB_CONFIG` with your connection info.
```json
{
  "mcpServers": {
    "database-mcp": {
      "command": "uvx",
      "args": ["database-mcp"],
      "env": {
        "DB_TYPE": "pg",
        "DB_CONFIG": "{\"host\":\"localhost\",\"port\":5432,\"user\":\"user\",\"password\":\"pw\",\"dbname\":\"dbname\"}"
      },
      "disabled": false,
      "autoApprove": []
    }
  }
}
```
UV Configuration Example (Multiple Databases):
```json
{
  "mcpServers": {
    "database-mcp": {
      "command": "uvx",
      "args": ["database-mcp"],
      "env": {
        "DB_CONFIGS": "[{\"id\":\"pg_main\",\"db_type\":\"pg\",\"configuration\":{\"host\":\"localhost\",\"port\":5432,\"user\":\"user\",\"password\":\"pw\",\"dbname\":\"postgres\"},\"description\":\"PostgreSQL Database\"},{\"id\":\"mysql_data\",\"db_type\":\"mysql\",\"configuration\":{\"host\":\"localhost\",\"port\":3306,\"user\":\"root\",\"password\":\"pass\",\"database\":\"mysql\"},\"description\":\"MySQL Database\"}]"
      },
      "disabled": false,
      "autoApprove": []
    }
  }
}
```
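Hand-escaping the `DB_CONFIGS` string is error-prone. One way to generate it, sketched in Python with the same example values as above (the script is ours, not part of database-mcp):

```python
import json

# The list of databases, written as ordinary Python dicts.
databases = [
    {
        "id": "pg_main",
        "db_type": "pg",
        "configuration": {"host": "localhost", "port": 5432,
                          "user": "user", "password": "pw", "dbname": "postgres"},
        "description": "PostgreSQL Database",
    },
    {
        "id": "mysql_data",
        "db_type": "mysql",
        "configuration": {"host": "localhost", "port": 3306,
                          "user": "root", "password": "pass", "database": "mysql"},
        "description": "MySQL Database",
    },
]

config = {
    "mcpServers": {
        "database-mcp": {
            "command": "uvx",
            "args": ["database-mcp"],
            # DB_CONFIGS must be a JSON *string*, so the list is serialized
            # once here and ends up escaped inside the outer JSON document.
            "env": {"DB_CONFIGS": json.dumps(databases)},
        }
    }
}
print(json.dumps(config, indent=2))
```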
Install via pip:
```bash
pip install database-mcp
```
PIP Configuration Example (Single Database):
```json
{
  "mcpServers": {
    "database": {
      "command": "python",
      "args": ["-m", "database_mcp"],
      "env": {
        "DB_TYPE": "pg",
        "DB_CONFIG": "{\"host\":\"localhost\",\"port\":5432,\"user\":\"user\",\"password\":\"pw\",\"dbname\":\"dbname\"}"
      }
    }
  }
}
```
```bash
python mcp_server.py
```
```bash
export DB_TYPE="pg"  # or mysql, redshift, etc.
export DB_CONFIG='{"host":"localhost","port":5432,"user":"username","password":"password","dbname":"database_name"}'
uv run src/database_mcp/mcp_server.py
```
```bash
export DB_CONFIGS='[{"id":"pg_main","db_type":"pg","configuration":{"host":"localhost","port":5432,"user":"username","password":"password","dbname":"database_name"},"description":"PostgreSQL Database"},{"id":"mysql_users","db_type":"mysql","configuration":{"host":"localhost","port":3306,"user":"root","password":"pass","database":"mysql"},"description":"MySQL Database"}]'
uv run src/database_mcp/mcp_server.py
```
If you don't specify an ID, the system will generate one automatically based on the database type and description:
```bash
export DB_CONFIGS='[{"db_type":"pg","configuration":{"host":"localhost","port":5432,"user":"username","password":"password","dbname":"database_name"},"description":"PostgreSQL Database"},{"db_type":"mysql","configuration":{"host":"localhost","port":3306,"user":"root","password":"pass","database":"mysql"},"description":"MySQL Database"}]'
# IDs will be generated as something like "pg_postgres_0" and "my_mysqldb_1"
uv run src/database_mcp/mcp_server.py
```
```bash
python mcp_server.py --db-type pg --db-config '{"host":"localhost","port":5432,"user":"username","password":"password","dbname":"database_name"}'
python mcp_server.py --db-configs '[{"id":"pg_main","db_type":"pg","configuration":{"host":"localhost","port":5432,"user":"username","password":"password","dbname":"database_name"},"description":"PostgreSQL Database"},{"id":"mysql_users","db_type":"mysql","configuration":{"host":"localhost","port":3306,"user":"root","password":"pass","database":"mysql"},"description":"MySQL Database"}]'
```
Note that you can specify a custom ID for each database using the `id` field, or let the system generate one based on the database type and description.
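The exact generation scheme is internal to the server; judging by the example IDs above (`pg_postgres_0`, `my_mysqldb_1`), a plausible sketch is a short type prefix plus a slug and a position index. The helper below is purely illustrative, not the server's actual code:

```python
import re

def generate_db_id(db_type: str, label: str, index: int) -> str:
    # Hypothetical scheme: 2-letter type prefix ("pg", "my"), a
    # lowercased slug derived from a label such as the database name,
    # and the entry's position in DB_CONFIGS.
    prefix = db_type[:2]
    slug = re.sub(r"[^a-z0-9]+", "_", label.lower()).strip("_") or "db"
    return f"{prefix}_{slug}_{index}"

print(generate_db_id("pg", "postgres", 0))   # pg_postgres_0
print(generate_db_id("mysql", "mysqldb", 1)) # my_mysqldb_1
```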
When connecting to multiple databases, you need to specify which database to use for each query:
- Use the `list_databases` tool to see available databases with their IDs
- Use `get_database_info` to view schema details of a database
- Use `find_table` to locate a table across all databases
- Pass the `db_id` parameter to tools like `execute_query`, `get_table_columns`, etc.

Database connections are managed internally as a dictionary of `DbConfig` objects, with each database having a unique ID. Schema information is represented as a list of table objects, where each table contains its name and column information.

The `select_database` prompt guides users through the database selection process.
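As a rough illustration of that registry (field names are assumed from the `DB_CONFIGS` entries shown earlier; the server's actual `DbConfig` class may differ):

```python
from dataclasses import dataclass

@dataclass
class DbConfig:
    # Hypothetical fields mirroring a DB_CONFIGS entry.
    id: str
    db_type: str
    configuration: dict
    description: str = ""

# Connections live in a dictionary keyed by each database's unique ID.
connections: dict[str, DbConfig] = {}
for entry in [
    {"id": "pg_main", "db_type": "pg",
     "configuration": {"host": "localhost", "port": 5432},
     "description": "PostgreSQL"},
    {"id": "mysql_users", "db_type": "mysql",
     "configuration": {"host": "localhost", "port": 3306},
     "description": "MySQL"},
]:
    cfg = DbConfig(**entry)
    connections[cfg.id] = cfg

# A tool call carrying db_id="pg_main" resolves its connection in one lookup.
print(connections["pg_main"].db_type)  # pg
```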
Database schemas are represented as a list of table objects, with each table containing information about its columns:
```json
[
  {
    "name": "users",
    "columns": [
      {"name": "id", "type": "integer"},
      {"name": "username", "type": "varchar"},
      {"name": "email", "type": "varchar"}
    ]
  },
  {
    "name": "orders",
    "columns": [
      {"name": "id", "type": "integer"},
      {"name": "user_id", "type": "integer"},
      {"name": "product_id", "type": "integer"},
      {"name": "quantity", "type": "integer"}
    ]
  }
]
```
This representation makes it easy to programmatically access table and column information while keeping a clean hierarchical structure.
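For example, a client that receives this structure can pull one table's column names with a simple scan (the helper function is ours, for illustration only):

```python
# The schema list, exactly as shown above.
schema = [
    {"name": "users", "columns": [
        {"name": "id", "type": "integer"},
        {"name": "username", "type": "varchar"},
        {"name": "email", "type": "varchar"},
    ]},
    {"name": "orders", "columns": [
        {"name": "id", "type": "integer"},
        {"name": "user_id", "type": "integer"},
        {"name": "product_id", "type": "integer"},
        {"name": "quantity", "type": "integer"},
    ]},
]

def table_columns(schema: list[dict], table_name: str) -> list[str]:
    """Return the column names of one table, or [] if it is absent."""
    for table in schema:
        if table["name"] == table_name:
            return [col["name"] for col in table["columns"]]
    return []

print(table_columns(schema, "orders"))  # ['id', 'user_id', 'product_id', 'quantity']
```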
Resource | Description |
---|---|
resource://schema/{database_id} | Get the schemas for one or all configured databases |
Tool | Description |
---|---|
execute_query | Execute a SQL query and return results as a markdown table |
execute_query_json | Execute a SQL query and return results as JSON |
get_table_columns | Get column names for a specific table |
get_table_types | Get column types for a specific table |
get_query_history | Get the recent query history |
list_databases | List all available database connections |
get_database_info | Get detailed information about a database including schema |
find_table | Find which database contains a specific table |
describe_table | Get detailed description of a table including column names and types |
get_table_sample | Get a sample of data from a table |
All database-specific tools (like `execute_query`, `get_table_columns`, etc.) require a `db_id` parameter to specify which database to use.
Prompt | Description |
---|---|
sql_query | Create an SQL query against the database |
explain_query | Explain what a SQL query does |
optimize_query | Optimize a SQL query for better performance |
select_database | Help user select which database to use |
Run this to start the inspector:

```bash
npx @modelcontextprotocol/inspector uv run src/database_mcp/mcp_server.py
```

Then, in the command input field, set something like:

```bash
run src/database_mcp/mcp_server.py --db-type pg --db-config '{"host":"localhost","port":5432,"user":"username","password":"password","dbname":"database_name"}'
```
```bash
uv pip install -e ".[dev]"
pytest
```
```bash
# Clean up build artifacts
rm -rf dist/ build/

# Remove any .egg-info directories if they exist
find . -name "*.egg-info" -type d -exec rm -rf {} + 2>/dev/null || true

# Build the package
uv run python -m build

# Upload to PyPI
uv run python -m twine upload dist/*
```
This repository is licensed under the GPL.