Enables AI assistants to interact with Azure Kubernetes Service (AKS) clusters.
The AKS-MCP is a Model Context Protocol (MCP) server that enables AI assistants to interact with Azure Kubernetes Service (AKS) clusters. It serves as a bridge between AI tools (like GitHub Copilot, Claude, and other MCP-compatible AI assistants) and AKS, translating natural language requests into AKS operations and returning the results in a format the AI tools can understand.
AKS-MCP connects to Azure using the Azure SDK and provides a set of tools that AI assistants can use to interact with AKS resources. It leverages the Model Context Protocol (MCP) to facilitate this communication, enabling AI tools to make API calls to Azure and interpret the responses.
AKS-MCP uses the Azure CLI (`az`) for AKS operations. Azure CLI authentication is attempted in this order:

1. Service Principal (client secret): When the `AZURE_CLIENT_ID`, `AZURE_CLIENT_SECRET`, and `AZURE_TENANT_ID` environment variables are present, a service principal login is performed: `az login --service-principal -u CLIENT_ID -p CLIENT_SECRET --tenant TENANT_ID`
2. Workload Identity (federated token): When the `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_FEDERATED_TOKEN_FILE` environment variables are present, a federated token login is performed: `az login --service-principal -u CLIENT_ID --tenant TENANT_ID --federated-token TOKEN`
3. User-assigned Managed Identity: When only the `AZURE_CLIENT_ID` environment variable is present, a user-assigned managed identity login is performed: `az login --identity -u CLIENT_ID`
4. System-assigned Managed Identity: When `AZURE_MANAGED_IDENTITY` is set to `system`, a system-assigned managed identity login is performed: `az login --identity`
5. Existing Login: When none of the above environment variables are set, AKS-MCP assumes you have already authenticated (for example, via `az login`) and uses the existing session.
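The precedence above can be sketched as a small shell function. This is only an illustrative model of the selection logic (the real implementation lives inside the AKS-MCP binary, and `detect_login_mode` is a hypothetical helper):

```shell
#!/usr/bin/env bash
# Illustrative model of the credential-detection order described above.
detect_login_mode() {
  if [[ -n "$AZURE_CLIENT_ID" && -n "$AZURE_CLIENT_SECRET" && -n "$AZURE_TENANT_ID" ]]; then
    echo "service-principal"    # az login --service-principal -u ... -p ... --tenant ...
  elif [[ -n "$AZURE_CLIENT_ID" && -n "$AZURE_TENANT_ID" && -n "$AZURE_FEDERATED_TOKEN_FILE" ]]; then
    echo "workload-identity"    # az login --service-principal ... --federated-token ...
  elif [[ -n "$AZURE_CLIENT_ID" ]]; then
    echo "user-assigned-msi"    # az login --identity -u CLIENT_ID
  elif [[ "$AZURE_MANAGED_IDENTITY" == "system" ]]; then
    echo "system-assigned-msi"  # az login --identity
  else
    echo "existing-session"     # reuse an existing az login session
  fi
}

# Clean slate so the demo below is deterministic.
unset AZURE_CLIENT_ID AZURE_CLIENT_SECRET AZURE_TENANT_ID AZURE_FEDERATED_TOKEN_FILE AZURE_MANAGED_IDENTITY

AZURE_CLIENT_ID=app AZURE_CLIENT_SECRET=secret AZURE_TENANT_ID=tenant detect_login_mode  # prints: service-principal
AZURE_MANAGED_IDENTITY=system detect_login_mode                                          # prints: system-assigned-msi
```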
Optional subscription selection: if `AZURE_SUBSCRIPTION_ID` is set, AKS-MCP runs `az account set --subscription SUBSCRIPTION_ID` after login.

Notes and security:

- The federated token file must be located at `/var/run/secrets/azure/tokens/azure-identity-token` and is strictly validated; other paths are rejected.
- You can confirm the active subscription with `az account show --query id -o tsv`.

Environment variables used:

- `AZURE_TENANT_ID`
- `AZURE_CLIENT_ID`
- `AZURE_CLIENT_SECRET`
- `AZURE_FEDERATED_TOKEN_FILE`
- `AZURE_SUBSCRIPTION_ID`
- `AZURE_MANAGED_IDENTITY` (set to `system` to opt into system-assigned managed identity)

The AKS-MCP server provides consolidated tools for interacting with AKS clusters. Some tools require read-write or admin permissions to run debugging pods on your cluster. To enable read-write or admin permissions for the AKS-MCP server, add the access-level parameter to your MCP configuration file:
(You can open the MCP configuration via the Command Palette: Ctrl+Shift+P on Windows/Linux or Cmd+Shift+P on macOS.) For example:

```json
"args": [
  "--transport",
  "stdio",
  "--access-level",
  "readwrite"
]
```
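Putting that together, a complete `.vscode/mcp.json` with the access level set might look like the following sketch (the binary path is a placeholder you must fill in):

```json
{
  "servers": {
    "aks-mcp-server": {
      "type": "stdio",
      "command": "<path to aks-mcp binary>",
      "args": [
        "--transport", "stdio",
        "--access-level", "readwrite"
      ]
    }
  }
}
```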
These tools have been designed to provide comprehensive functionality through unified interfaces:
Tool: az_aks_operations
Unified tool for managing Azure Kubernetes Service (AKS) clusters and related operations.
Available Operations:
Read-Only (all access levels):
- `show`: Show cluster details
- `list`: List clusters in subscription/resource group
- `get-versions`: Get available Kubernetes versions
- `check-network`: Perform outbound network connectivity check
- `nodepool-list`: List node pools in cluster
- `nodepool-show`: Show node pool details
- `account-list`: List Azure subscriptions

Read-Write (`readwrite`/`admin` access levels):

- `create`: Create new cluster
- `delete`: Delete cluster
- `scale`: Scale cluster node count
- `start`: Start a stopped cluster
- `stop`: Stop a running cluster
- `update`: Update cluster configuration
- `upgrade`: Upgrade Kubernetes version
- `nodepool-add`: Add node pool to cluster
- `nodepool-delete`: Delete node pool
- `nodepool-scale`: Scale node pool
- `nodepool-upgrade`: Upgrade node pool
- `account-set`: Set active subscription
- `login`: Azure authentication

Admin-Only (`admin` access level):

- `get-credentials`: Get cluster credentials for kubectl access

Tool: az_network_resources
Unified tool for getting Azure network resource information used by AKS clusters.
Available Resource Types:
- `all`: Get information about all network resources
- `vnet`: Virtual Network information
- `subnet`: Subnet information
- `nsg`: Network Security Group information
- `route_table`: Route Table information
- `load_balancer`: Load Balancer information
- `private_endpoint`: Private endpoint information

Tool: az_monitoring
Unified tool for Azure monitoring and diagnostics operations for AKS clusters.
Available Operations:
- `metrics`: List metric values for resources
- `resource_health`: Retrieve resource health events for AKS clusters
- `app_insights`: Execute KQL queries against Application Insights telemetry data
- `diagnostics`: Check if AKS cluster has diagnostic settings configured
- `control_plane_logs`: Query AKS control plane logs with safety constraints and time range validation

Tool: get_aks_vmss_info
Tool: az_compute_operations
Unified tool for managing Azure Virtual Machines (VMs) and Virtual Machine Scale Sets (VMSS) used by AKS.
Available Operations:
- `show`: Get details of a VM/VMSS
- `list`: List VMs/VMSS in subscription or resource group
- `get-instance-view`: Get runtime status
- `start`: Start VM
- `stop`: Stop VM
- `restart`: Restart VM/VMSS instances
- `reimage`: Reimage VMSS instances (VM not supported for reimage)

Resource Types: `vm` (single virtual machines), `vmss` (virtual machine scale sets)
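For orientation, an MCP client invokes these unified tools with a single `tools/call` request. The argument names below (`operation`, `resource_type`, and so on) are illustrative assumptions rather than the server's documented schema:

```json
{
  "method": "tools/call",
  "params": {
    "name": "az_compute_operations",
    "arguments": {
      "operation": "get-instance-view",
      "resource_type": "vmss",
      "resource_group": "my-rg",
      "name": "aks-nodepool1-vmss"
    }
  }
}
```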
Tool: az_fleet
Comprehensive Azure Fleet management for multi-cluster scenarios.
Available Operations:
Supports both Azure Fleet management and Kubernetes ClusterResourcePlacement CRD operations.
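To illustrate the ClusterResourcePlacement side, here is a sketch that writes a CRP manifest and (commented out) applies it against a fleet hub cluster. The API version and policy fields below are an assumption based on the Fleet ClusterResourcePlacement CRD; verify them against your Fleet version. The `app=frontend` label matches the example prompts later in this document:

```shell
#!/usr/bin/env bash
# Sketch: place the "nginx" namespace onto all member clusters labeled app=frontend.
cat > crp.yaml <<'EOF'
apiVersion: placement.kubernetes-fleet.io/v1beta1
kind: ClusterResourcePlacement
metadata:
  name: nginx-placement
spec:
  resourceSelectors:
    - group: ""
      version: v1
      kind: Namespace
      name: nginx
  policy:
    placementType: PickAll
    affinity:
      clusterAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          clusterSelectorTerms:
            - labelSelector:
                matchLabels:
                  app: frontend
EOF
# kubectl apply -f crp.yaml   # run against the fleet hub cluster
```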
Tool: list_detectors
Tool: run_detector
Tool: run_detectors_by_category
Tool: az_advisor_recommendation
Retrieve and manage Azure Advisor recommendations for AKS clusters.
Available Operations:
- `list`: List recommendations with filtering options
- `report`: Generate recommendation reports

Note: kubectl commands are available with all access levels. Additional tools require explicit enablement via `--additional-tools`.
kubectl Tools (Unified Interface):
Read-Only (all access levels):

- `kubectl_resources`: View resources (get, describe) - filtered to read-only operations in readonly mode
- `kubectl_diagnostics`: Debug and diagnose (logs, events, top, exec, cp)
- `kubectl_cluster`: Cluster information (cluster-info, api-resources, api-versions, explain)
- `kubectl_config`: Configuration management (diff, auth, config) - filtered to read-only operations in readonly mode

Read-Write/Admin (`readwrite`/`admin` access levels):

- `kubectl_resources`: Full resource management (get, describe, create, delete, apply, patch, replace, cordon, uncordon, drain, taint)
- `kubectl_workloads`: Workload lifecycle (run, expose, scale, autoscale, rollout)
- `kubectl_metadata`: Metadata management (label, annotate, set)
- `kubectl_config`: Full configuration management (diff, auth, certificate, config)

Additional Tools (Optional):

- `helm`: Helm package manager (requires `--additional-tools helm`)
- `cilium`: Cilium CLI for eBPF networking (requires `--additional-tools cilium`)

Tool: inspektor_gadget_observability
Real-time observability tool for Azure Kubernetes Service (AKS) clusters using eBPF.
Available Actions:

- `deploy`: Deploy Inspektor Gadget to cluster (requires `readwrite`/`admin` access)
- `undeploy`: Remove Inspektor Gadget from cluster (requires `readwrite`/`admin` access)
- `is_deployed`: Check deployment status
- `run`: Run one-shot gadgets
- `start`: Start continuous gadgets
- `stop`: Stop running gadgets
- `get_results`: Retrieve gadget results
- `list_gadgets`: List available gadgets

Available Gadgets:

- `observe_dns`: Monitor DNS requests and responses
- `observe_tcp`: Monitor TCP connections
- `observe_file_open`: Monitor file system operations
- `observe_process_execution`: Monitor process execution
- `observe_signal`: Monitor signal delivery
- `observe_system_calls`: Monitor system calls
- `top_file`: Top files by I/O operations
- `top_tcp`: Top TCP connections by traffic

Set up Azure CLI and authenticate:

```shell
az login
```
The easiest way to get started with AKS-MCP is through the Azure Kubernetes Service Extension for VS Code.
1. Install the Azure Kubernetes Service extension from the Extensions view (Ctrl+Shift+X on Windows/Linux or Cmd+Shift+X on macOS).
2. Set up the AKS-MCP server from the Command Palette (Ctrl+Shift+P on Windows/Linux or Cmd+Shift+P on macOS).

Upon successful installation, the server will be visible in MCP: List Servers (via the Command Palette). From there, you can start the MCP server or view its status.
Once started, the MCP server will appear in the Copilot Chat: Configure Tools dropdown under `MCP Server: AKS MCP`, ready to enhance contextual prompts based on your AKS environment. By default, all AKS-MCP server tools are enabled. You can review the list of available tools and disable any that are not required for your specific scenario.
Try a prompt like "List all my AKS clusters", which will start using tools from the AKS-MCP server.
The MCP configuration differs depending on whether VS Code is running on Windows or inside WSL:
Windows Host (VS Code on Windows): Use `"command": "wsl"` to invoke the WSL binary from Windows:

```json
{
  "servers": {
    "aks-mcp": {
      "type": "stdio",
      "command": "wsl",
      "args": [
        "--",
        "/home/you/.vs-kubernetes/tools/aks-mcp/aks-mcp",
        "--transport",
        "stdio"
      ]
    }
  }
}
```
Remote-WSL (VS Code running inside WSL): Call the binary directly or use a shell wrapper:

```json
{
  "servers": {
    "aks-mcp": {
      "type": "stdio",
      "command": "bash",
      "args": [
        "-c",
        "/home/you/.vs-kubernetes/tools/aks-mcp/aks-mcp --transport stdio"
      ]
    }
  }
}
```
Troubleshooting ENOENT Errors

If you see "spawn ENOENT" errors, verify your VS Code environment:

- Windows host: confirm the binary is reachable from WSL with `wsl -- ls /path/to/aks-mcp`.
- Remote-WSL: do not use `"command": "wsl"`; use direct paths or a bash wrapper as shown above.

Benefits: The AKS extension handles binary downloads, updates, and configuration automatically, ensuring you always have the latest version with optimal settings.
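A quick way to tell which of the two configurations applies is to check whether the current shell is running inside WSL. This is a common heuristic based on the kernel version string, not an official check:

```shell
#!/usr/bin/env bash
# WSL kernels report "microsoft" in /proc/version.
if grep -qi microsoft /proc/version 2>/dev/null; then
  echo "Running inside WSL: use the Remote-WSL configuration"
else
  echo "Not inside WSL: use the Windows host configuration (command: wsl)"
fi
```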
Choose your platform and download the latest AKS-MCP binary:
| Platform | Architecture | Download Link |
| --- | --- | --- |
| Windows | AMD64 | aks-mcp-windows-amd64.exe |
| Windows | ARM64 | aks-mcp-windows-arm64.exe |
| macOS | Intel (AMD64) | aks-mcp-darwin-amd64 |
| macOS | Apple Silicon (ARM64) | aks-mcp-darwin-arm64 |
| Linux | AMD64 | aks-mcp-linux-amd64 |
| Linux | ARM64 | aks-mcp-linux-arm64 |
After downloading, create a `.vscode/mcp.json` file in your workspace root with the path to your downloaded binary.
For quick setup, you can use these one-liner scripts that download the binary and create the configuration:
Windows (PowerShell):

```powershell
# Download binary and create VS Code configuration
mkdir -p .vscode ; Invoke-WebRequest -Uri "https://github.com/Azure/aks-mcp/releases/latest/download/aks-mcp-windows-amd64.exe" -OutFile "aks-mcp.exe" ; @{servers=@{"aks-mcp-server"=@{type="stdio";command="$PWD\aks-mcp.exe";args=@("--transport","stdio")}}} | ConvertTo-Json -Depth 3 | Out-File ".vscode/mcp.json" -Encoding UTF8
```

macOS/Linux (Bash):

```shell
# Download binary and create VS Code configuration
mkdir -p .vscode && curl -sL https://github.com/Azure/aks-mcp/releases/latest/download/aks-mcp-linux-amd64 -o aks-mcp && chmod +x aks-mcp && echo '{"servers":{"aks-mcp-server":{"type":"stdio","command":"'$PWD'/aks-mcp","args":["--transport","stdio"]}}}' > .vscode/mcp.json
```
Simple Setup: Download the binary for your platform, then use the manual configuration below to set up the MCP server in VS Code.
You can configure the AKS-MCP server in two ways:
1. Workspace-specific configuration (recommended for project-specific usage):
Create a `.vscode/mcp.json` file in your workspace with the path to your downloaded binary:

```json
{
  "servers": {
    "aks-mcp-server": {
      "type": "stdio",
      "command": "<enter the file path>",
      "args": [
        "--transport", "stdio"
      ]
    }
  }
}
```
2. User-level configuration (persistent across all workspaces):
For a persistent configuration that works across all your VS Code workspaces, add the MCP server to your VS Code user settings:
```json
{
  "github.copilot.chat.mcp.servers": {
    "aks-mcp-server": {
      "type": "stdio",
      "command": "<enter the file path>",
      "args": [
        "--transport", "stdio"
      ]
    }
  }
}
```
Tip: If you don't see the AKS-MCP tools after restarting, check the VS Code output panel for any MCP server connection errors and verify your binary path in `.vscode/mcp.json`.

Note: Ensure you have authenticated with Azure CLI (`az login`) so the server can access your Azure resources.
For other MCP-compatible AI clients like Claude Desktop, configure the server in your MCP configuration:
```json
{
  "mcpServers": {
    "aks": {
      "command": "<path of binary aks-mcp>",
      "args": [
        "--transport", "stdio"
      ]
    }
  }
}
```
You can enable the AKS-MCP server directly from MCP Toolkit:
- [REQUIRED] `azure_dir`: Path to your Azure credentials directory, e.g. `/home/user/.azure` (must be absolute, without `$HOME` or `~`)
- [REQUIRED] Path to your kubeconfig file, e.g. `/home/user/.kube/config` (must be absolute, without `$HOME` or `~`)
- [REQUIRED] Access level: set to `readonly`, `readwrite`, or `admin` as needed
- [OPTIONAL] `container_user`: Username or UID to run the container as (default is `mcp`), e.g. use `1000` to match your host user ID (see note below). Only needed if you are using Docker Engine on Linux.
- Version `>= v0.16.0` is required for the MCP gateway.

Note: When running the MCP gateway using Docker Engine, you have to set the `container_user` to match your host user ID (e.g. using `id -u`) to ensure proper file permissions for accessing mounted volumes. On Docker Desktop, this is handled automatically if you use `desktop-*` contexts, confirmed by running `docker context ls`.
On Windows, the Azure credentials won't work by default, but you have two options:

1. Long-lived servers: Configure the MCP gateway to use long-lived servers with the `--long-lived` flag and then authenticate with Azure CLI in the container; see option B in Containerized MCP configuration below for how to fetch credentials inside the container.

2. Custom Azure Directory: Set up a custom Azure directory:

```powershell
# Set custom Azure config directory
$env:AZURE_CONFIG_DIR = "$env:USERPROFILE\.azure-for-docker"
# Disable token cache encryption (to match behavior with Linux/macOS)
$env:AZURE_CORE_ENCRYPT_TOKEN_CACHE = "false"
# Login to Azure CLI
az login
```

This will store the credentials in `$env:USERPROFILE\.azure-for-docker` (e.g. `C:\Users\<username>\.azure-for-docker`); use this path in the AKS-MCP server configuration `azure_dir`.
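If you go the custom-directory route with a containerized setup, the volume mount would then point at that directory instead of `~/.azure`. For example, this fragment of the `docker run` arguments is illustrative (the Windows path is a placeholder):

```json
"-v",
"C:\\Users\\<username>\\.azure-for-docker:/home/mcp/.azure",
```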
You can also use the MCP Gateway to enable the AKS-MCP server directly:

```shell
# Enable AKS-MCP server in Docker MCP Gateway
docker mcp server enable aks
```

Note: You still need to configure the server (e.g. using `docker mcp config`) with your Azure credentials, kubeconfig file, and access level.
For containerized deployment, you can run AKS-MCP server using the official Docker image:
Option A: Mount credentials from host (recommended):
```json
{
  "mcpServers": {
    "aks": {
      "type": "stdio",
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "--user",
        "<your-user-id (e.g. id -u)>",
        "-v",
        "~/.azure:/home/mcp/.azure",
        "-v",
        "~/.kube:/home/mcp/.kube",
        "ghcr.io/azure/aks-mcp:latest",
        "--transport",
        "stdio"
      ]
    }
  }
}
```
Option B: Fetch the credentials inside the container:

```json
{
  "mcpServers": {
    "aks": {
      "type": "stdio",
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "ghcr.io/azure/aks-mcp:latest",
        "--transport",
        "stdio"
      ]
    }
  }
}
```
Start the MCP server container first as shown above, then run the following commands to fetch the credentials:

```shell
docker exec -it <container-id> az login --use-device-code
docker exec -it <container-id> az aks get-credentials -g <resource-group> -n <cluster-name>
```

Note that with this option, the credentials are stored inside the container (under `~/.azure`) rather than on your host.

You can configure any MCP-compatible client to use the AKS-MCP server by running the binary directly:
```shell
# Run the server directly
./aks-mcp --transport stdio
```

For direct binary usage without package managers, make the binary executable first:

```shell
chmod +x aks-mcp
```
Command line arguments:
```text
Usage of ./aks-mcp:
      --access-level string      Access level (readonly, readwrite, admin) (default "readonly")
      --additional-tools string  Comma-separated list of additional Kubernetes tools to support (kubectl is always enabled). Available: helm,cilium,hubble
      --allow-namespaces string  Comma-separated list of allowed Kubernetes namespaces (empty means all namespaces)
      --host string              Host to listen for the server (only used with transport sse or streamable-http) (default "127.0.0.1")
      --otlp-endpoint string     OTLP endpoint for OpenTelemetry traces (e.g. localhost:4317, default "")
      --port int                 Port to listen for the server (only used with transport sse or streamable-http) (default 8000)
      --timeout int              Timeout for command execution in seconds, default is 600s (default 600)
      --transport string         Transport mechanism to use (stdio, sse or streamable-http) (default "stdio")
  -v, --verbose                  Enable verbose logging
```
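As an example of combining the flags above, the sketch below assembles an invocation that serves over streamable HTTP with helm enabled and namespaces restricted (the binary path and namespace list are placeholders):

```shell
#!/usr/bin/env bash
# Build the command as an array so it is easy to inspect before running.
AKS_MCP=./aks-mcp   # path to the downloaded binary (assumption)

args=(
  --transport streamable-http
  --host 127.0.0.1
  --port 8080
  --access-level readonly
  --additional-tools helm
  --allow-namespaces dev,staging
)

echo "$AKS_MCP ${args[*]}"   # inspect the full command line
# "$AKS_MCP" "${args[@]}"    # uncomment to actually start the server
```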
Environment variables: service principal credentials can be supplied via environment variables (`AZURE_TENANT_ID`, `AZURE_CLIENT_ID`, `AZURE_CLIENT_SECRET`, `AZURE_SUBSCRIPTION_ID`).

Prerequisites:

- Go `1.24.x` installed on your local machine
- `bash` available via `/usr/bin/env bash` (Makefile targets use multi-line recipes with fail-fast mode)
- GNU Make `4.x` or later

Note: If your login shell is different (e.g., `zsh` on macOS), you do not need to change it; the Makefile sets variables to run all recipes in `bash` for consistent behavior across platforms.
This project includes a Makefile for convenient development, building, and testing. To see all available targets:
```shell
make help
```

```shell
# Build the binary
make build

# Run tests
make test

# Run tests with coverage
make test-coverage

# Format and lint code
make check

# Build for all platforms
make release

# Install dependencies
make deps

# Build and run with --help
make run

# Clean build artifacts
make clean

# Install binary to GOBIN
make install

# Build Docker image
make docker-build

# Run Docker container
make docker-run
```
If you prefer to build without the Makefile:
```shell
go build -o aks-mcp ./cmd/aks-mcp
```
Ask any questions about your AKS clusters in your AI client, for example:
List all my AKS clusters in my subscription xxx.
What is the network configuration of my AKS cluster?
Show me the network security groups associated with my cluster.
Create a new Azure Fleet named prod-fleet in eastus region.
List all members in my fleet.
Create a placement to deploy nginx workloads to clusters with app=frontend label.
Show me all ClusterResourcePlacements in my fleet.
Telemetry collection is on by default. To opt out, set the environment variable `AKS_MCP_COLLECT_TELEMETRY=false`.
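For example, to opt out for the current shell session before launching the server (the launch command itself is just the stdio invocation shown earlier):

```shell
#!/usr/bin/env bash
# Disable telemetry for this session, then start the server as usual.
export AKS_MCP_COLLECT_TELEMETRY=false

# ./aks-mcp --transport stdio
```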
We welcome contributions to AKS-MCP! Whether you're fixing bugs, adding features, or improving documentation, your help makes this project better.
Read our detailed Contributing Guide for comprehensive information on the development workflow. In short:

- Install dependencies and build: `make deps && make build`
- Validate changes with `make test` and `make check`
Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party's policies.