Official MCP server for dbt (data build tool) providing integration with dbt Core/Cloud CLI, project metadata discovery, model information, and semantic layer querying capabilities.
This MCP (Model Context Protocol) server provides tools to interact with dbt. Read this blog post to learn more. Add comments or questions to GitHub Issues, or join us in the community Slack in the #tools-dbt-mcp channel.
Copy the .env.example file locally to a file called .env and set it with your specific environment variables (see the Configuration section of this README).

The MCP server takes the following environment variable configuration:
Name | Default | Description |
---|---|---|
DISABLE_DBT_CLI | false | Set this to true to disable dbt Core, dbt Cloud CLI, and dbt Fusion MCP tools |
DISABLE_SEMANTIC_LAYER | false | Set this to true to disable dbt Semantic Layer MCP objects |
DISABLE_DISCOVERY | false | Set this to true to disable dbt Discovery API MCP objects |
DISABLE_REMOTE | true | Set this to false to enable remote MCP objects |
The following environment variables configure the dbt Cloud connection:

Name | Default | Description |
---|---|---|
DBT_HOST | cloud.getdbt.com | Your dbt Cloud instance hostname. This will look like an Access URL. If you are using Multi-cell, do not include the ACCOUNT_PREFIX here |
MULTICELL_ACCOUNT_PREFIX | - | If you are using Multi-cell, set this to your ACCOUNT_PREFIX. If you are not using Multi-cell, do not set this environment variable |
DBT_TOKEN | - | Your personal access token or service token. Note: a service token is required when using the Semantic Layer, and this service token should have at least Semantic Layer Only, Metadata Only, and Developer permissions |
DBT_PROD_ENV_ID | - | Your dbt Cloud production environment ID |
These environment variables identify your dbt Cloud development environment:

Name | Description |
---|---|
DBT_DEV_ENV_ID | Your dbt Cloud development environment ID |
DBT_USER_ID | Your dbt Cloud user ID |
These environment variables configure the local dbt CLI:

Name | Description |
---|---|
DBT_PROJECT_DIR | The path to the local repository of your dbt project. This should look something like /Users/firstnamelastname/reponame |
DBT_PATH | The path to your dbt Core, dbt Cloud CLI, or dbt Fusion executable. You can find your dbt executable by running which dbt |
It is also possible to set any environment variable supported by your dbt executable (see the dbt Core documentation for the ones it supports). We automatically set DBT_WARN_ERROR_OPTIONS='{"error": ["NoNodesForSelectionCriteria"]}' so that the MCP server knows when no node is selected by a dbt command. You can overwrite it if needed, but we believe it provides a better experience when calling dbt from the MCP server by ensuring the tool is selecting valid nodes.
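Putting the configuration above together, a minimal .env file might look like the sketch below. Every value here is a placeholder (the token, environment ID, and paths are hypothetical); substitute your own:

```shell
# Example .env for dbt-mcp -- all values are placeholders.

# Tool groups (defaults shown)
DISABLE_DBT_CLI=false
DISABLE_SEMANTIC_LAYER=false
DISABLE_DISCOVERY=false
DISABLE_REMOTE=true

# dbt Cloud connection (replace with your own values)
DBT_HOST=cloud.getdbt.com
DBT_TOKEN=dbtc_placeholder_token
DBT_PROD_ENV_ID=123456

# Local dbt CLI (replace with your own paths)
DBT_PROJECT_DIR=/Users/firstnamelastname/reponame
DBT_PATH=/usr/local/bin/dbt
```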
After going through the Setup, you can use dbt-mcp with an MCP client. Add this configuration to the respective client's config file, being sure to replace the sections within <>:
```json
{
  "mcpServers": {
    "dbt-mcp": {
      "command": "uvx",
      "args": [
        "--env-file",
        "<path-to-.env-file>",
        "dbt-mcp"
      ]
    }
  }
}
```
<path-to-.env-file> is where you saved the .env file from the Setup step.
Follow these instructions to create the claude_desktop_config.json file and connect. For debugging, you can find the Claude Desktop logs at ~/Library/Logs/Claude on macOS or %APPDATA%\Claude\logs on Windows.
Note the configuration options and input your selections. See the Cursor MCP docs for reference.
Open the Settings menu (Command + Comma) and select the correct tab atop the page for your use case:

- Workspace - configures the server in the context of your workspace
- User - configures the server in the context of your user

Then:

1. Select Features → Chat
2. Ensure that "Mcp" is Enabled
3. Click "Edit in settings.json" under "Mcp > Discovery"
4. Add your server configuration (dbt) to the provided settings.json file as one of the servers:
```json
{
  "mcp": {
    "inputs": [],
    "servers": {
      "dbt": {
        "command": "uvx",
        "args": [
          "--env-file",
          "<path-to-.env-file>",
          "dbt-mcp"
        ]
      }
    }
  }
}
```
<path-to-.env-file> is where you saved the .env file from the Setup step.
You can manage the server with the MCP: List Servers command from the Command Palette (Control + Command + P) and selecting the server, or directly in the settings.json file. See the VS Code MCP docs for reference.
Some clients have trouble finding uvx from the JSON config. If this happens, try finding the full path to uvx with which uvx on Unix systems and placing this full path in the JSON. For instance: "command": "/the/full/path/to/uvx".
The dbt CLI tools:

- build - Executes models, tests, snapshots, and seeds in dependency order
- compile - Generates executable SQL from models, tests, and analyses without running them
- docs - Generates documentation for the dbt project
- ls (list) - Lists resources in the dbt project, such as models and tests
- parse - Parses and validates the project's files for syntax correctness
- run - Executes models to materialize them in the database
- test - Runs tests to validate data and model integrity
- show - Runs a query against the data warehouse

Allowing your client to utilize dbt commands through this MCP tooling could modify your data models, sources, and warehouse objects. Proceed only if you trust the client and understand the potential impact.
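As a rough illustration of what these tools do under the hood, the sketch below shells out to a dbt-style executable with DBT_WARN_ERROR_OPTIONS set. This is a hypothetical simplification, not the server's actual code, and echo stands in for the real dbt binary so the snippet is runnable:

```python
import json
import os
import subprocess

def call_dbt(command: str, *args: str, dbt_path: str = "echo") -> str:
    """Invoke a dbt-style executable the way an MCP CLI tool might.

    `dbt_path` defaults to `echo` here so the sketch runs without dbt
    installed; in practice it would come from the DBT_PATH variable.
    """
    env = dict(os.environ)
    # The server sets this so a selection matching no nodes raises an
    # error instead of silently succeeding.
    env["DBT_WARN_ERROR_OPTIONS"] = json.dumps(
        {"error": ["NoNodesForSelectionCriteria"]}
    )
    result = subprocess.run(
        [dbt_path, command, *args],
        env=env, capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

print(call_dbt("run", "--select", "my_model"))  # echoes: run --select my_model
```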
Semantic Layer tools:

- list_metrics - Retrieves all defined metrics
- get_dimensions - Gets dimensions associated with specified metrics
- get_entities - Gets entities associated with specified metrics
- query_metrics - Queries metrics with optional grouping, ordering, filtering, and limiting

Discovery tools:

- get_mart_models - Gets all mart models
- get_all_models - Gets all models
- get_model_details - Gets details for a specific model
- get_model_parents - Gets parent nodes of a specific model
- get_model_children - Gets children models of a specific model

Remote tools:

- text_to_sql - Generate SQL from natural language requests
- execute_sql - Execute SQL on dbt Cloud's backend infrastructure with support for Semantic Layer SQL syntax. Note: using a PAT instead of a service token for DBT_TOKEN is required for this tool.

Read CONTRIBUTING.md for instructions on how to get involved!