# Powerdrill MCP Server

A Model Context Protocol (MCP) server that provides tools to interact with Powerdrill datasets, enabling smart AI data analysis and insights. It authenticates with your Powerdrill User ID and Project API Key.
Visit https://powerdrill.ai/ to run AI data analysis on your own or with your team. If you have your team's Powerdrill User ID and Project API Key, you can also work with your data through Powerdrill's open-source web clients.
To install powerdrill-mcp for Claude Desktop automatically via Smithery:

```bash
npx -y @smithery/cli install @powerdrillai/powerdrill-mcp --client claude
```
```bash
# Install globally
npm install -g @powerdrillai/powerdrill-mcp

# Or run directly with npx
npx @powerdrillai/powerdrill-mcp
```
Clone this repository and install dependencies:

```bash
git clone https://github.com/yourusername/powerdrill-mcp.git
cd powerdrill-mcp
npm install
```
If installed globally:

```bash
# Start the MCP server
powerdrill-mcp
```

If using npx:

```bash
# Run the latest version
npx -y @powerdrillai/powerdrill-mcp@latest
```
You'll need to configure environment variables with your Powerdrill credentials before running:

```bash
# Set environment variables
export POWERDRILL_USER_ID="your_user_id"
export POWERDRILL_PROJECT_API_KEY="your_project_api_key"
```

Or create a `.env` file with these values.
To use this MCP server, you'll need a Powerdrill account with valid API credentials (a User ID and a Project API Key). Here's how to obtain them:

First, watch this video tutorial on how to create your Powerdrill Team:

Then, follow this video tutorial to set up your API credentials:
The easiest way to set up the server is using the provided setup script:

```bash
# Make the script executable
chmod +x setup.sh

# Run the setup script
./setup.sh
```
This will set up the project and create a `.env` file if it doesn't already exist. Then edit your `.env` file with your actual credentials:
```bash
POWERDRILL_USER_ID=your_actual_user_id
POWERDRILL_PROJECT_API_KEY=your_actual_project_api_key
```
Also update the credentials in the generated configuration files before using them.
If you prefer to set up manually:

```bash
# Install dependencies
npm install

# Build the TypeScript code
npm run build

# Copy the environment example file
cp .env.example .env

# Edit the .env file with your credentials, then start the server
npm start
```
If running via `npx` (recommended):

```json
{
  "powerdrill": {
    "command": "npx",
    "args": [
      "-y",
      "@powerdrillai/powerdrill-mcp@latest"
    ],
    "env": {
      "POWERDRILL_USER_ID": "your_actual_user_id",
      "POWERDRILL_PROJECT_API_KEY": "your_actual_project_api_key"
    }
  }
}
```
Or, if running from a local build:

```json
{
  "powerdrill": {
    "command": "node",
    "args": ["/path/to/powerdrill-mcp/dist/index.js"],
    "env": {
      "POWERDRILL_USER_ID": "your_actual_user_id",
      "POWERDRILL_PROJECT_API_KEY": "your_actual_project_api_key"
    }
  }
}
```
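Depending on the client, the `powerdrill` entry may need to be nested under a top-level `mcpServers` key (for example in Claude Desktop's `claude_desktop_config.json`). A sketch of the full file in that case:

```json
{
  "mcpServers": {
    "powerdrill": {
      "command": "npx",
      "args": ["-y", "@powerdrillai/powerdrill-mcp@latest"],
      "env": {
        "POWERDRILL_USER_ID": "your_actual_user_id",
        "POWERDRILL_PROJECT_API_KEY": "your_actual_project_api_key"
      }
    }
  }
}
```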
Once connected, you can use the Powerdrill tools in your conversations with Claude Desktop, Cursor, Cline, Windsurf, and other MCP clients. For example:

- List datasets: *What datasets are available in my Powerdrill account?* or *Show me all my datasets*
- Create a dataset: *Create a new dataset called "Sales Analytics"* or *Make a new dataset named "Customer Data" with description "Customer information for 2024 analysis"*
- Upload a file: *Upload the file /Users/your_name/Downloads/sales_data.csv to dataset {dataset_id}* or *Add my local file /path/to/customer_data.xlsx to my {dataset_id} dataset*
- Get a dataset overview: *Tell me more about this dataset: {dataset_id}* or *Describe the structure of dataset {dataset_id}*
- Analyze data: *Analyze dataset {dataset_id} with this question: "How has the trend changed over time?"* or *Run a query on {dataset_id} asking "What are the top 10 customers by revenue?"*
- Create a session: *Create a new session named "Sales Analysis 2024" for my data analysis* or *Start a session called "Customer Segmentation" for analyzing market data*
- List data sources: *What data sources are available in dataset {dataset_id}?* or *Show me all files in the {dataset_id} dataset*
- List sessions: *Show me all my current analysis sessions* or *List my recent data analysis sessions*
Lists available datasets from your Powerdrill account.

Parameters:

- `limit` (optional): Maximum number of datasets to return

Example response:

```json
{
  "datasets": [
    {
      "id": "dataset-dasfadsgadsgas",
      "name": "mydata",
      "description": "my dataset"
    }
  ]
}
```
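The response is plain JSON, so a client can pull out the dataset IDs directly. A minimal Python sketch using the example payload above (not live data):

```python
import json

# Example payload mirroring the list-datasets response above
response = json.loads("""
{
  "datasets": [
    {"id": "dataset-dasfadsgadsgas", "name": "mydata", "description": "my dataset"}
  ]
}
""")

# Collect dataset IDs for use with the other tools (overview, analysis, uploads)
dataset_ids = [d["id"] for d in response["datasets"]]
print(dataset_ids)  # → ['dataset-dasfadsgadsgas']
```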
Gets detailed overview information about a specific dataset.

Parameters:

- `datasetId` (required): The ID of the dataset to get overview information for

Example response:

```json
{
  "id": "dset-cm5axptyyxxx298",
  "name": "sales_indicators_2024",
  "description": "A dataset comprising 373 travel bookings with 15 attributes...",
  "summary": "This dataset contains 373 travel bookings with 15 attributes...",
  "exploration_questions": [
    "How does the booking price trend over time based on the BookingTimestamp?",
    "How does the average booking price change with respect to the TravelDate?"
  ],
  "keywords": [
    "Travel Bookings",
    "Booking Trends",
    "Travel Agencies"
  ]
}
```
Creates a job to analyze data with natural language questions.

Parameters:

- `question` (required): The natural language question or prompt to analyze the data
- `dataset_id` (required): The ID of the dataset to analyze
- `datasource_ids` (optional): Array of specific data source IDs within the dataset to analyze
- `session_id` (optional): Session ID to group related jobs
- `stream` (optional, default: false): Whether to stream the results
- `output_language` (optional, default: "AUTO"): The language for the output
- `job_mode` (optional, default: "AUTO"): The job mode

Example response:

```json
{
  "job_id": "job-cm3ikdeuj02zk01l1yeuirt77",
  "blocks": [
    {
      "type": "CODE",
      "content": "```python\nimport pandas as pd\n\ndef invoke(input_0: pd.DataFrame) -> pd.DataFrame:\n...",
      "stage": "Analyze"
    },
    {
      "type": "TABLE",
      "url": "https://static.powerdrill.ai/tmp_datasource_cache/code_result/...",
      "name": "trend_data.csv",
      "expires_at": "2024-11-21T09:56:34.290544Z"
    },
    {
      "type": "IMAGE",
      "url": "https://static.powerdrill.ai/tmp_datasource_cache/code_result/...",
      "name": "Trend of Deaths from Natural Disasters Over the Century",
      "expires_at": "2024-11-21T09:56:34.290544Z"
    },
    {
      "type": "MESSAGE",
      "content": "Analysis of Trends in the Number of Deaths from Natural Disasters...",
      "stage": "Respond"
    }
  ]
}
```
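Because job responses interleave several block types, a client typically dispatches on each block's `type`. A Python sketch over sample blocks shaped like the response above (URLs and contents are placeholders):

```python
# Sample blocks shaped like a create-job response (placeholder values)
blocks = [
    {"type": "CODE", "content": "import pandas as pd ...", "stage": "Analyze"},
    {"type": "TABLE", "url": "https://example.com/trend_data.csv", "name": "trend_data.csv"},
    {"type": "IMAGE", "url": "https://example.com/chart.png", "name": "Trend chart"},
    {"type": "MESSAGE", "content": "Analysis of trends ...", "stage": "Respond"},
]

messages, artifacts = [], []
for block in blocks:
    if block["type"] == "MESSAGE":
        messages.append(block["content"])                # human-readable findings
    elif block["type"] in ("TABLE", "IMAGE"):
        artifacts.append((block["name"], block["url"]))  # downloadable results
    # CODE blocks carry the generated analysis code and are skipped here

print(messages, artifacts)
```

Note that `TABLE` and `IMAGE` URLs carry an `expires_at` timestamp, so artifacts should be downloaded promptly.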
Creates a new session to group related jobs together.

Parameters:

- `name` (required): The session name, which can be up to 128 characters in length
- `output_language` (optional, default: "AUTO"): The language in which the output is generated. Options include: "AUTO", "EN", "ES", "AR", "PT", "ID", "JA", "RU", "HI", "FR", "DE", "VI", "TR", "PL", "IT", "KO", "ZH-CN", "ZH-TW"
- `job_mode` (optional, default: "AUTO"): Job mode for the session. Options include: "AUTO", "DATA_ANALYTICS"
- `max_contextual_job_history` (optional, default: 10): The maximum number of recent jobs retained as context for the next job (0-10)
- `agent_id` (optional, default: "DATA_ANALYSIS_AGENT"): The ID of the agent

Example response:

```json
{
  "session_id": "session-abcdefghijklmnopqrstuvwxyz"
}
```
Lists data sources in a specific dataset.

Parameters:

- `datasetId` (required): The ID of the dataset to list data sources from
- `pageNumber` (optional, default: 1): The page number to start listing
- `pageSize` (optional, default: 10): The number of items on a single page
- `status` (optional): Filter data sources by status: `synching`, `invalid`, `synched` (comma-separated for multiple)

Example response:

```json
{
  "count": 3,
  "total": 5,
  "page": 1,
  "page_size": 10,
  "data_sources": [
    {
      "id": "dsource-a1b2c3d4e5f6g7h8i9j0",
      "name": "sales_data.csv",
      "type": "CSV",
      "status": "synched",
      "size": 1048576,
      "dataset_id": "dset-cm5axptyyxxx298"
    },
    {
      "id": "dsource-b2c3d4e5f6g7h8i9j0k1",
      "name": "customer_info.xlsx",
      "type": "EXCEL",
      "status": "synched",
      "size": 2097152,
      "dataset_id": "dset-cm5axptyyxxx298"
    },
    {
      "id": "dsource-c3d4e5f6g7h8i9j0k1l2",
      "name": "market_research.pdf",
      "type": "PDF",
      "status": "synched",
      "size": 3145728,
      "dataset_id": "dset-cm5axptyyxxx298"
    }
  ]
}
```
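Sources still `synching` are generally not ready to query, so a client will often filter on the `status` field (the `status` parameter also accepts comma-separated values server-side). A small sketch over placeholder records:

```python
# Placeholder data-source records shaped like the response above
data_sources = [
    {"id": "dsource-a1", "name": "sales_data.csv", "status": "synched"},
    {"id": "dsource-b2", "name": "customer_info.xlsx", "status": "synched"},
    {"id": "dsource-c3", "name": "market_research.pdf", "status": "synching"},
]

# Keep only sources that have finished syncing
ready = [d["id"] for d in data_sources if d["status"] == "synched"]
print(ready)  # → ['dsource-a1', 'dsource-b2']
```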
Lists sessions from your Powerdrill account.

Parameters:

- `pageNumber` (optional, default: 1): The page number to start listing
- `pageSize` (optional, default: 10): The number of items on a single page
- `search` (optional): Search for sessions by name

Example response:

```json
{
  "count": 2,
  "total": 2,
  "sessions": [
    {
      "id": "session-123abc",
      "name": "Product Analysis",
      "job_count": 3,
      "created_at": "2024-03-15T10:30:00Z",
      "updated_at": "2024-03-15T11:45:00Z"
    },
    {
      "id": "session-456def",
      "name": "Financial Forecasting",
      "job_count": 5,
      "created_at": "2024-03-10T14:20:00Z",
      "updated_at": "2024-03-12T09:15:00Z"
    }
  ]
}
```
Creates a new dataset in your Powerdrill account.

Parameters:

- `name` (required): The dataset name, which can be up to 128 characters in length
- `description` (optional): The dataset description, which can be up to 128 characters in length

Example response:

```json
{
  "id": "dataset-adsdfasafdsfasdgasd",
  "message": "Dataset created successfully"
}
```
Creates a new data source by uploading a local file to a specified dataset.

Parameters:

- `dataset_id` (required): The ID of the dataset to create the data source in
- `file_path` (required): The local path to the file to upload
- `file_name` (optional): Custom name for the file; defaults to the original filename
- `chunk_size` (optional, default: 5MB): Size of each chunk in bytes for multipart upload

Example response:

```json
{
  "dataset_id": "dset-cm5axptyyxxx298",
  "data_source": {
    "id": "dsource-a1b2c3d4e5f6g7h8i9j0",
    "name": "sales_data_2024.csv",
    "type": "FILE",
    "status": "synched",
    "size": 2097152
  },
  "file": {
    "name": "sales_data_2024.csv",
    "size": 2097152,
    "object_key": "uploads/user_123/sales_data_2024.csv"
  }
}
```
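The `chunk_size` parameter controls how the file is split for multipart upload. A sketch of the chunking arithmetic under the 5 MB default (illustrative only; the server's actual upload protocol is not shown here):

```python
CHUNK_SIZE = 5 * 1024 * 1024  # default chunk_size: 5 MB

def chunk_ranges(file_size: int, chunk_size: int = CHUNK_SIZE):
    """Yield (offset, length) pairs covering a file of file_size bytes."""
    for offset in range(0, file_size, chunk_size):
        yield offset, min(chunk_size, file_size - offset)

# The 2 MiB file from the example response fits in a single chunk
print(list(chunk_ranges(2_097_152)))  # → [(0, 2097152)]
```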
If you encounter issues:

- Verify that the credentials in your `.env` file are correct
- Confirm the server starts cleanly with `npm start`
## License

MIT