# 🧱 Bricks and Context

**Production-grade Model Context Protocol (MCP) server for Databricks**

SQL Warehouses · Jobs API · Multi-Workspace · Built for AI Agents
## ✨ What is this?
Bricks and Context lets AI assistants (Cursor, Claude Desktop, etc.) talk directly to your Databricks workspaces through the Model Context Protocol.
Think of it as a bridge: your AI asks questions, this server translates them into Databricks API calls, and returns structured, AI-friendly responses.
### Why use this?
| Pain Point | How we solve it |
|---|---|
| AI gets overwhelmed by huge query results | **Bounded outputs** – configurable row/byte/cell limits |
| Flaky connections cause random failures | **Retries + circuit breakers** – automatic fault tolerance |
| Managing multiple environments is tedious | **Multi-workspace** – switch between dev/prod with one parameter |
| Raw API responses confuse AI models | **Markdown tables** – structured, LLM-optimized output |
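For example, instead of a raw API payload, a large query result comes back as a truncated Markdown table. The exact shape below is only illustrative, not the server's literal output:

```
| id | name  | amount |
|----|-------|--------|
| 1  | Alice | 42.50  |
| 2  | Bob   | 17.25  |

... 200 of 1,048,576 rows shown (row/byte limits reached)
```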
## 🔧 Available Tools
**SQL Tools**

| Tool | What it does |
|---|---|
| `execute_sql_query` | Run SQL with bounded, AI-safe output |
| `discover_schemas` | List all schemas in the workspace |
| `discover_tables` | List tables in a schema with metadata |
| `describe_table` | Get column types, nullability, structure |
| `get_table_sample` | Preview rows for data exploration |
| `connection_health` | Verify Databricks connectivity |
**Job Tools**

| Tool | What it does |
|---|---|
| `list_jobs` | List jobs with optional name filtering |
| `get_job_details` | Full job config: schedule, cluster, tasks |
| `get_job_runs` | Run history with state and duration |
| `trigger_job` | Start a job with optional parameters |
| `cancel_job_run` | Stop a running job |
| `get_job_run_output` | Retrieve logs, errors, notebook output |
**Observability**

| Tool | What it does |
|---|---|
| `cache_stats` | Hit rates, memory usage, category breakdown |
| `performance_stats` | Operation latencies, error rates, health |
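The Multi-Workspace section below shows the call style; as a quick taste, a few illustrative invocations (argument names beyond `sql`, `workspace`, and `limit` are inferred from the descriptions above, not confirmed signatures):

```python
execute_sql_query(sql="SELECT COUNT(*) FROM catalog.schema.my_table")
describe_table(table="catalog.schema.my_table")  # "table" argument name assumed
trigger_job(job_id=123)                          # "job_id" assumed; parameters are optional
cache_stats()
```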
## 🚀 Quick Start

### 1. Clone & Install
```bash
git clone https://github.com/laraib-sidd/bricks-and-context.git
cd bricks-and-context
uv sync  # or: pip install -e .
```
### 2. Configure Workspaces
Copy the template and add your credentials:
```bash
cp auth.template.yaml auth.yaml
```

Edit `auth.yaml`:
```yaml
default_workspace: dev

workspaces:
  - name: dev
    host: your-dev.cloud.databricks.com
    token: dapi...
    http_path: /sql/1.0/warehouses/...
  - name: prod
    host: your-prod.cloud.databricks.com
    token: dapi...
    http_path: /sql/1.0/warehouses/...
```
> 💡 `auth.yaml` is gitignored. Your secrets stay local.
### 3. Run
```bash
python run_mcp_server.py
```
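The Cursor config below points the server at `auth.yaml` and `config.json` via `MCP_AUTH_PATH` and `MCP_CONFIG_PATH`; presumably the same variables work when launching by hand, e.g.:

```bash
# Paths are examples; adjust to your checkout
MCP_AUTH_PATH=./auth.yaml MCP_CONFIG_PATH=./config.json python run_mcp_server.py
```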
## 🎯 Cursor Integration
Cursor uses stdio transport and doesn't inherit your shell environment. You need explicit paths.
### Step 1: Ensure dependencies are installed

```bash
cd /path/to/bricks-and-context
uv sync
```
### Step 2: Open MCP settings in Cursor

Cmd+Shift+P → "Open MCP Settings" → opens `~/.cursor/mcp.json`
### Step 3: Add this configuration

Using `uv run` (recommended):
```json
{
  "mcpServers": {
    "databricks": {
      "command": "uv",
      "args": [
        "--directory", "/path/to/bricks-and-context",
        "run", "python", "run_mcp_server.py"
      ],
      "env": {
        "MCP_AUTH_PATH": "/path/to/bricks-and-context/auth.yaml",
        "MCP_CONFIG_PATH": "/path/to/bricks-and-context/config.json"
      }
    }
  }
}
```
Or using the venv's Python directly:
```json
{
  "mcpServers": {
    "databricks": {
      "command": "/path/to/bricks-and-context/.venv/bin/python",
      "args": ["/path/to/bricks-and-context/run_mcp_server.py"],
      "env": {
        "MCP_AUTH_PATH": "/path/to/bricks-and-context/auth.yaml",
        "MCP_CONFIG_PATH": "/path/to/bricks-and-context/config.json"
      }
    }
  }
}
```
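If Cursor doesn't list the server after a restart, run the same command by hand with the same environment (this mirrors the config above exactly) and confirm it starts cleanly before debugging the Cursor side:

```bash
MCP_AUTH_PATH=/path/to/bricks-and-context/auth.yaml \
MCP_CONFIG_PATH=/path/to/bricks-and-context/config.json \
uv --directory /path/to/bricks-and-context run python run_mcp_server.py
```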
### Step 4: Restart Cursor
Reload the window to activate the MCP server.
### Test it
Ask your AI:

- "List my Databricks jobs"
- "Run `SELECT 1` on Databricks"
- "Describe the table `catalog.schema.my_table`"
## 🌐 Multi-Workspace
Define multiple workspaces in `auth.yaml`, then select one per call:

```python
execute_sql_query(sql="SELECT 1", workspace="prod")
list_jobs(limit=10, workspace="dev")
```
When `workspace` is omitted, the server uses `default_workspace`.
## ⚙️ Configuration

`config.json` – tunable settings (committed to the repo):
| Setting | Default | Description |
|---|---|---|
| `max_connections` | 10 | Connection pool size |
| `max_result_rows` | 200 | Max rows returned per query |
| `max_result_bytes` | 262144 | Max response size (256 KB) |
| `max_cell_chars` | 200 | Truncate long cell values |
| `allow_write_queries` | false | Enable INSERT/UPDATE/DELETE |
| `enable_sql_retries` | true | Retry transient SQL failures |
| `enable_query_cache` | false | Cache repeated queries |
| `query_cache_ttl_seconds` | 300 | Cache TTL in seconds |
| `databricks_api_timeout_seconds` | 30 | Jobs API timeout in seconds |

Any setting can be overridden via an environment variable (uppercase, e.g. `MAX_RESULT_ROWS=500`).
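These overrides slot into the same `env` block as the auth paths in the Cursor config. A fragment as an example (values illustrative):

```json
"env": {
  "MCP_AUTH_PATH": "/path/to/bricks-and-context/auth.yaml",
  "MCP_CONFIG_PATH": "/path/to/bricks-and-context/config.json",
  "MAX_RESULT_ROWS": "500",
  "ENABLE_QUERY_CACHE": "true"
}
```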
## 🏗️ Architecture
```
┌─────────────────────────────────────────────────────────────┐
│                MCP Client (Cursor / Claude)                  │
└──────────────────────────────┬──────────────────────────────┘
                               │ stdio
                               ▼
┌─────────────────────────────────────────────────────────────┐
│                       FastMCP Server                         │
│  ┌───────────┐      ┌───────────┐      ┌─────────────────┐  │
│  │ SQL Tools │      │ Job Tools │      │  Observability  │  │
│  └─────┬─────┘      └─────┬─────┘      └────────┬────────┘  │
└────────┼──────────────────┼─────────────────────┼───────────┘
         │                  │                     │
         ▼                  ▼                     ▼
┌─────────────────┐ ┌────────────────┐ ┌──────────────────────┐
│ Connection Pool │ │  Job Manager   │ │ Cache / Perf Monitor │
│ (SQL Connector) │ │ (REST API 2.1) │ │                      │
└────────┬────────┘ └───────┬────────┘ └──────────────────────┘
         │                  │
         └─────────┬────────┘
                   ▼
┌─────────────────────────────────────────────────────────────┐
│                   Databricks Workspace(s)                    │
│         SQL Warehouse              Jobs Service              │
└─────────────────────────────────────────────────────────────┘
```
## 🛡️ Reliability Features
| Feature | Description |
|---|---|
| Bounded outputs | Rows, bytes, and cell-character limits prevent OOM |
| Connection pooling | Thread-safe with per-connection health validation |
| Retry with backoff | Exponential backoff + jitter for transient failures |
| Circuit breakers | Automatic fault isolation, prevents cascading failures |
| Query caching | Optional TTL-based caching for repeated queries |
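The retry behavior above follows a standard pattern. A minimal sketch of exponential backoff with jitter, illustrative only and not the server's actual implementation:

```python
import random
import time

def with_retries(operation, max_attempts=3, base_delay=0.5):
    """Retry a transient-failure-prone call with exponential backoff + jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts, surface the error
            # Exponential backoff (0.5s, 1s, 2s, ...) plus random jitter
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.1)
            time.sleep(delay)
```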
## 🧑‍💻 Development
```bash
uv sync --dev     # Install dev dependencies
uv run pytest     # Run tests
uv run black .    # Format code
uv run mypy src/  # Type check
```
## 📄 License

MIT – see LICENSE.