Build robust data workflows, integrations, and analytics on a single intuitive platform.
Connect your AI agents, MCP clients (Cursor, Claude, Windsurf, VS Code ...) and other AI assistants to Keboola. Expose data, transformations, SQL queries, and job triggers—no glue code required. Deliver the right data to agents when and where they need it.
Keboola MCP Server is an open-source bridge between your Keboola project and modern AI tools. It turns Keboola features—like storage access, SQL transformations, and job triggers—into callable tools for Claude, Cursor, CrewAI, LangChain, Amazon Q, and more.
Before you start, make sure you have uv installed. The MCP client will use it to automatically download and run the Keboola MCP Server.
Installing uv:
macOS/Linux:
# If Homebrew is not installed on your machine, install it first:
# /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Install using Homebrew
brew install uv
Windows:
# Using the installer script
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
# Or using pip
pip install uv
# Or using winget
winget install --id=astral-sh.uv -e
For more installation options, see the official uv documentation.
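You can confirm the installation worked by checking that both the uv and uvx commands are on your PATH (uvx ships with uv):

# Both commands should print a version number
uv --version
uvx --version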
Before setting up the MCP server, you need three key pieces of information:
KBC_STORAGE_TOKEN is your authentication token for Keboola:
For instructions on how to create and manage Storage API tokens, refer to the official Keboola documentation.
Note: If you want the MCP server to have limited access, use a custom storage token; if you want it to access everything in your project, use the master token.
KBC_WORKSPACE_SCHEMA identifies your workspace in Keboola and is required for SQL queries:
Follow this Keboola guide to get your KBC_WORKSPACE_SCHEMA.
Note: Check the Grant read-only access to all Project data option when creating the workspace.
Your Keboola API URL depends on your deployment region. You can determine your region by looking at the URL in your browser when logged into your Keboola project:
Region | API URL |
---|---|
AWS North America | https://connection.keboola.com |
AWS Europe | https://connection.eu-central-1.keboola.com |
Google Cloud EU | https://connection.europe-west3.gcp.keboola.com |
Google Cloud US | https://connection.us-east4.gcp.keboola.com |
Azure EU | https://connection.north-europe.azure.keboola.com |
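Once you have your token and API URL, you can sanity-check them from a terminal before configuring any client. This is a minimal sketch assuming the Storage API's token verification endpoint (/v2/storage/tokens/verify); substitute your own region URL and token:

# Verify the token against your region's Storage API
# (a JSON response with token details means both values are valid)
curl -s -H "X-StorageApi-Token: your_keboola_storage_token" \
  https://connection.YOUR_REGION.keboola.com/v2/storage/tokens/verify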
If your Keboola project uses the BigQuery backend, you will need to set the GOOGLE_APPLICATION_CREDENTIALS environment variable in addition to KBC_STORAGE_TOKEN and KBC_WORKSPACE_SCHEMA: download your BigQuery credentials JSON file and point the variable at its full path (see the BigQuery notes in the configuration examples below).
There are four ways to use the Keboola MCP Server, depending on your needs:
In this mode, Claude or Cursor automatically starts the MCP server for you. You do not need to run any commands in your terminal.
{
"mcpServers": {
"keboola": {
"command": "uvx",
"args": [
"keboola_mcp_server",
"--api-url", "https://connection.YOUR_REGION.keboola.com"
],
"env": {
"KBC_STORAGE_TOKEN": "your_keboola_storage_token",
"KBC_WORKSPACE_SCHEMA": "your_workspace_schema"
}
}
}
}
Note: For BigQuery users, add "GOOGLE_APPLICATION_CREDENTIALS": "/full/path/to/credentials.json" to the "env" object.
Config file locations:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
{
"mcpServers": {
"keboola": {
"command": "uvx",
"args": [
"keboola_mcp_server",
"--api-url", "https://connection.YOUR_REGION.keboola.com"
],
"env": {
"KBC_STORAGE_TOKEN": "your_keboola_storage_token",
"KBC_WORKSPACE_SCHEMA": "your_workspace_schema"
}
}
}
}
Note: For BigQuery users, add "GOOGLE_APPLICATION_CREDENTIALS": "/full/path/to/credentials.json" to the "env" object.
When running the MCP server from Windows Subsystem for Linux with Cursor AI, use this configuration:
{
"mcpServers": {
"keboola": {
"command": "wsl.exe",
"args": [
"bash",
"-c",
"source /wsl_path/to/keboola-mcp-server/.env && /wsl_path/to/keboola-mcp-server/.venv/bin/python -m keboola_mcp_server.cli --transport stdio"
]
}
}
}
Here, the /wsl_path/to/keboola-mcp-server/.env file contains the environment variables:
export KBC_STORAGE_TOKEN="your_keboola_storage_token"
export KBC_WORKSPACE_SCHEMA="your_workspace_schema"
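Before wiring this into Cursor, you can run the same invocation by hand from a Windows terminal to confirm the paths are right; this sketch assumes the CLI exposes the standard --help flag:

# Run from PowerShell or cmd; should print the server's CLI usage
wsl.exe bash -c "source /wsl_path/to/keboola-mcp-server/.env && /wsl_path/to/keboola-mcp-server/.venv/bin/python -m keboola_mcp_server.cli --help"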
For developers working on the MCP server code itself:
{
"mcpServers": {
"keboola": {
"command": "/absolute/path/to/.venv/bin/python",
"args": [
"-m", "keboola_mcp_server.cli",
"--transport", "stdio",
"--api-url", "https://connection.YOUR_REGION.keboola.com"
],
"env": {
"KBC_STORAGE_TOKEN": "your_keboola_storage_token",
"KBC_WORKSPACE_SCHEMA": "your_workspace_schema",
}
}
}
}
Note: For BigQuery users, add "GOOGLE_APPLICATION_CREDENTIALS": "/full/path/to/credentials.json" to the "env" object.
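The config above expects a virtual environment with the server installed. A minimal sketch of creating one with uv, assuming the source lives in the keboola-mcp-server GitHub repository:

# Clone the sources and create the virtual environment
git clone https://github.com/keboola/keboola-mcp-server.git
cd keboola-mcp-server
uv sync --extra dev
# The interpreter path to put into "command" is then:
#   /absolute/path/to/keboola-mcp-server/.venv/bin/python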
You can run the server manually in a terminal for testing or debugging:
# Set environment variables
export KBC_STORAGE_TOKEN=your_keboola_storage_token
export KBC_WORKSPACE_SCHEMA=your_workspace_schema
# For BigQuery users
# export GOOGLE_APPLICATION_CREDENTIALS=/full/path/to/credentials.json
# Run with uvx (no installation needed)
uvx keboola_mcp_server --api-url https://connection.YOUR_REGION.keboola.com
# OR, if developing locally
python -m keboola_mcp_server.cli --api-url https://connection.YOUR_REGION.keboola.com
Note: This mode is primarily for debugging or testing. For normal use with Claude or Cursor, you do not need to manually run the server.
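If you want to poke at the server's tools interactively, the MCP Inspector (a Node.js tool from the Model Context Protocol project) can wrap the same command; a sketch, assuming npx is available and the environment variables from above are exported:

# Opens a local web UI for listing and calling the server's tools
npx @modelcontextprotocol/inspector uvx keboola_mcp_server \
  --api-url https://connection.YOUR_REGION.keboola.com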
docker pull keboola/mcp-server:latest
# For Snowflake users
docker run -it \
-e KBC_STORAGE_TOKEN="YOUR_KEBOOLA_STORAGE_TOKEN" \
-e KBC_WORKSPACE_SCHEMA="YOUR_WORKSPACE_SCHEMA" \
keboola/mcp-server:latest \
--api-url https://connection.YOUR_REGION.keboola.com
# For BigQuery users (add credentials volume mount)
# docker run -it \
# -e KBC_STORAGE_TOKEN="YOUR_KEBOOLA_STORAGE_TOKEN" \
# -e KBC_WORKSPACE_SCHEMA="YOUR_WORKSPACE_SCHEMA" \
# -e GOOGLE_APPLICATION_CREDENTIALS="/creds/credentials.json" \
# -v /local/path/to/credentials.json:/creds/credentials.json \
# keboola/mcp-server:latest \
# --api-url https://connection.YOUR_REGION.keboola.com
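To keep tokens out of your shell history, you can hand the variables to Docker through an env file instead of -e flags; a sketch using docker run's standard --env-file option (the file uses plain KEY=value lines, no quotes or export):

# keboola.env:
#   KBC_STORAGE_TOKEN=YOUR_KEBOOLA_STORAGE_TOKEN
#   KBC_WORKSPACE_SCHEMA=YOUR_WORKSPACE_SCHEMA
docker run -it --env-file keboola.env \
  keboola/mcp-server:latest \
  --api-url https://connection.YOUR_REGION.keboola.com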
Scenario | Need to Run Manually? | Use This Setup |
---|---|---|
Using Claude/Cursor | No | Configure MCP in app settings |
Developing MCP locally | No (Claude starts it) | Point config to python path |
Testing CLI manually | Yes | Use terminal to run |
Using Docker | Yes | Run docker container |
Once your MCP client (Claude/Cursor) is configured and running, you can start querying your Keboola data:
You can start with a simple query to confirm everything is working:
What buckets and tables are in my Keboola project?
You can then move on to prompts for data exploration, data analysis, and data pipelines.
MCP Client | Support Status | Connection Method |
---|---|---|
Claude (Desktop & Web) | ✅ Supported, tested | stdio |
Cursor | ✅ Supported, tested | stdio |
Windsurf, Zed, Replit | ✅ Supported | stdio |
Codeium, Sourcegraph | ✅ Supported | HTTP+SSE |
Custom MCP Clients | ✅ Supported | HTTP+SSE or stdio |
Note: Keboola MCP is pre-1.0, so some breaking changes might occur. Your AI agents will automatically adjust to new tools.
Category | Tool | Description |
---|---|---|
Storage | retrieve_buckets | Lists all storage buckets in your Keboola project |
| get_bucket_detail | Retrieves detailed information about a specific bucket |
| retrieve_bucket_tables | Returns all tables within a specific bucket |
| get_table_detail | Provides detailed information for a specific table |
| update_bucket_description | Updates the description of a bucket |
| update_column_description | Updates the description for a given column in a table |
| update_table_description | Updates the description of a table |
SQL | query_table | Executes custom SQL queries against your data |
| get_sql_dialect | Identifies whether your workspace uses the Snowflake or BigQuery SQL dialect |
Component | retrieve_components | Lists all available extractors, writers, and applications |
| get_component_details | Retrieves detailed configuration information for a specific component |
| retrieve_transformations | Returns all transformation configurations in your project |
| create_sql_transformation | Creates a new SQL transformation with custom queries |
| update_sql_transformation | Updates an existing SQL transformation's configuration, SQL query, or description, or disables the configuration |
Job | retrieve_jobs | Lists and filters jobs by status, component, or configuration |
| get_job_detail | Returns comprehensive details about a specific job |
| start_job | Triggers a component or transformation job to run |
Documentation | docs_query | Searches Keboola documentation based on natural language queries |
Issue | Solution |
---|---|
Authentication Errors | Verify KBC_STORAGE_TOKEN is valid |
Workspace Issues | Confirm KBC_WORKSPACE_SCHEMA is correct |
Connection Timeout | Check network connectivity |
Basic setup:
uv sync --extra dev
With the basic setup, you can use uv run tox
to run tests and check code style.
Recommended setup:
uv sync --extra dev --extra tests --extra integtests --extra codestyle
With the recommended setup, the packages for testing and code style checking are installed as well, which allows IDEs like VS Code or Cursor to check the code and run tests during development.
To run integration tests locally, use uv run tox -e integtests.
NOTE: You will need to set the following environment variables:
INTEGTEST_STORAGE_API_URL
INTEGTEST_STORAGE_TOKEN
INTEGTEST_WORKSPACE_SCHEMA
In order to get these values, you need a dedicated Keboola project for integration tests.
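With a dedicated test project in hand, a local run looks like this (all values are placeholders):

# Point the integration tests at your dedicated test project
export INTEGTEST_STORAGE_API_URL="https://connection.YOUR_REGION.keboola.com"
export INTEGTEST_STORAGE_TOKEN="your_test_project_storage_token"
export INTEGTEST_WORKSPACE_SCHEMA="your_test_workspace_schema"
uv run tox -e integtests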
uv.lock
Update the uv.lock file if you have added or removed dependencies. Also consider updating the lock with newer dependency versions when creating a release (uv lock --upgrade).
⭐ The primary way to get help, report bugs, or request features is by opening an issue on GitHub. ⭐
The development team actively monitors issues and will respond as quickly as possible. For general information about Keboola, please use the resources below.