Inspect database schemas and execute queries on Google BigQuery.
A Model Context Protocol server that provides access to BigQuery. This server enables LLMs to inspect database schemas and execute queries.
The server implements three tools:

execute-query: Executes a SQL query using the BigQuery dialect.
list-tables: Lists all tables in the BigQuery database.
describe-table: Describes the schema of a specific table.
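As a sketch of how a client might invoke one of these tools, a JSON-RPC tools/call request for execute-query could look like the following (the argument name "query" and the SQL statement are illustrative assumptions, not confirmed from this server's tool schema):

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "execute-query",
    "arguments": {
      "query": "SELECT name, COUNT(*) AS n FROM `my_dataset.events` GROUP BY name LIMIT 10"
    }
  }
}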
The server can be configured with the following arguments:

--project (required): The GCP project ID.
--location (required): The GCP location (e.g. europe-west9).
--dataset (optional): Only take specific BigQuery datasets into consideration. Several datasets can be specified by repeating the argument (e.g. --dataset my_dataset_1 --dataset my_dataset_2). If not provided, all datasets in the project will be considered.
--key-file (optional): Path to a service account key file for BigQuery. If not provided, the server will use the default credentials.
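For example, a hypothetical invocation of the published package that restricts the server to two datasets (the project ID, location, and dataset names below are placeholders):

uvx mcp-server-bigquery --project my-gcp-project --location europe-west9 --dataset sales --dataset marketing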
To install BigQuery Server for Claude Desktop automatically via Smithery:

npx -y @smithery/cli install mcp-server-bigquery --client claude
To configure the server manually, add it to your Claude Desktop configuration file:

On MacOS: ~/Library/Application\ Support/Claude/claude_desktop_config.json
On Windows: %APPDATA%/Claude/claude_desktop_config.json
"mcpServers": {
"bigquery": {
"command": "uv",
"args": [
"--directory",
"{{PATH_TO_REPO}}",
"run",
"mcp-server-bigquery",
"--project",
"{{GCP_PROJECT_ID}}",
"--location",
"{{GCP_LOCATION}}"
]
}
}
"mcpServers": {
"bigquery": {
"command": "uvx",
"args": [
"mcp-server-bigquery",
"--project",
"{{GCP_PROJECT_ID}}",
"--location",
"{{GCP_LOCATION}}"
]
}
}
Replace {{PATH_TO_REPO}}, {{GCP_PROJECT_ID}}, and {{GCP_LOCATION}} with the appropriate values.
To prepare the package for distribution:
uv sync
uv build
This will create source and wheel distributions in the dist/ directory.

To publish the package to PyPI:

uv publish
Note: You'll need to set PyPI credentials via environment variables or command flags:
--token or UV_PUBLISH_TOKEN
--username / UV_PUBLISH_USERNAME and --password / UV_PUBLISH_PASSWORD
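As a sketch, publishing with a PyPI API token could look like either of the following (the token value is a placeholder):

UV_PUBLISH_TOKEN=pypi-xxxxxxxx uv publish
uv publish --token pypi-xxxxxxxx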
Since MCP servers run over stdio, debugging can be challenging. For the best debugging experience, we strongly recommend using the MCP Inspector.
You can launch the MCP Inspector via npm with this command:
npx @modelcontextprotocol/inspector uv --directory {{PATH_TO_REPO}} run mcp-server-bigquery
Upon launching, the Inspector will display a URL that you can access in your browser to begin debugging.