Homelab MCP Servers
A collection of Model Context Protocol (MCP) servers for managing and monitoring your homelab infrastructure through Claude Desktop. Monitor Docker/Podman containers, Ollama AI models, Pi-hole DNS, Unifi networks, and Ansible inventory.
Security Notice
IMPORTANT: Please read SECURITY.md before deploying this project.
This project interacts with critical infrastructure (Docker APIs, DNS, network devices). Improper configuration can expose your homelab to security risks.
Key Security Requirements:
- NEVER expose Docker/Podman APIs to the internet - Use firewall rules to restrict access
- Keep .env file secure - Contains API keys and should never be committed
- Use unique API keys - Generate separate keys for each service
- Review network security - Ensure proper VLAN segmentation and firewall rules
See SECURITY.md for comprehensive security guidance.
Documentation Overview
This project includes several documentation files for different audiences:
- README.md (this file) - Installation, setup, and usage guide
- MIGRATION.md - Migration guide for v2.0 unified server
- PROJECT_INSTRUCTIONS.md - Copy into Claude project instructions for AI context
- CLAUDE.md - Developer guide for AI assistants and contributors
- SECURITY.md - Security policies and best practices
- CONTRIBUTING.md - How to contribute to this project
- CHANGELOG.md - Version history and changes
- For End Users: Follow this README and copy PROJECT_INSTRUCTIONS.md into Claude
- Migrating from v1.x? See MIGRATION.md for unified server migration
- For AI Assistants: Read CLAUDE.md for complete development context
- For Contributors: Start with CONTRIBUTING.md and CLAUDE.md
Important: Configure Claude Project Instructions
After setting up the MCP servers, create your personalized project instructions:
1. Copy the example templates:

# Windows
copy PROJECT_INSTRUCTIONS.example.md PROJECT_INSTRUCTIONS.md
copy CLAUDE.example.md CLAUDE.md

# Linux/Mac
cp PROJECT_INSTRUCTIONS.example.md PROJECT_INSTRUCTIONS.md
cp CLAUDE.example.md CLAUDE.md
2. Edit both files with your actual infrastructure details:
PROJECT_INSTRUCTIONS.md (for Claude Desktop project instructions):
- Replace example IP addresses with your real network addresses
- Add your actual server hostnames
- Customize with your specific services and configurations
- Keep this file private - it contains your network topology
CLAUDE.md (for AI development work - contributors only):
- Update repository URLs with your actual GitHub repository
- Add your Notion workspace URLs if using task management
- Customize infrastructure references
- Keep this file private - contains your specific URLs and setup
3. Add to Claude Desktop:
- Open Claude Desktop
- Go to your project settings
- Copy the contents of your customized PROJECT_INSTRUCTIONS.md
- Paste into the "Project instructions" field
What's included:
- Detailed MCP server capabilities and usage patterns
- Infrastructure overview and monitoring capabilities
- Specific commands and tools available for each service
- Troubleshooting and development guidance
This README covers installation and basic setup. The project instructions provide Claude with comprehensive usage context.
Deployment Options
Version 2.0+ offers two deployment modes:
Unified Server (Recommended for New Deployments)
Run all MCP servers in a single process with namespaced tools:
{
"mcpServers": {
"homelab-unified": {
"command": "python",
"args": ["C:\\Path\\To\\Homelab-MCP\\homelab_unified_mcp.py"]
}
}
}
Advantages:
- Single configuration entry
- One Python process for all servers
- Better Docker deployment
- Cleaner logs (no duplicate warnings)
- All tools namespaced (e.g., docker_get_containers, ping_ping_host)
Individual Servers (Legacy, Fully Supported)
Run each MCP server as a separate process:
{
"mcpServers": {
"docker": {
"command": "python",
"args": ["C:\\Path\\To\\Homelab-MCP\\docker_mcp_podman.py"]
},
"ollama": {
"command": "python",
"args": ["C:\\Path\\To\\Homelab-MCP\\ollama_mcp.py"]
}
}
}
Advantages:
- Granular control over each server
- Can enable/disable servers individually
- Original tool names (e.g., get_docker_containers, ping_host)
- Backward compatible with v1.x
Migration Guide: See MIGRATION.md for detailed migration instructions and tool name changes.
Quick Start
1. Clone the repository
git clone https://github.com/bjeans/homelab-mcp
cd homelab-mcp
2. Install security checks (recommended)
# Install pre-push git hook for automatic security validation
python helpers/install_git_hook.py
3. Set up configuration files
Environment variables:
# Windows
copy .env.example .env
# Linux/Mac
cp .env.example .env
Edit .env with your actual values:
# Windows
notepad .env
# Linux/Mac
nano .env
Ansible inventory (if using):
# Windows
copy ansible_hosts.example.yml ansible_hosts.yml
# Linux/Mac
cp ansible_hosts.example.yml ansible_hosts.yml
Edit with your infrastructure details.
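For orientation, here is a minimal sketch of what ansible_hosts.yml can look like, using the ollama_servers and nut_servers group names referenced later in this README (hostnames and addresses are placeholders; see ansible_hosts.example.yml for the full template):

```yaml
ollama_servers:
  hosts:
    dell-server.example.local:
      ansible_host: 192.168.1.100

nut_servers:
  hosts:
    dell-server.example.local:
      ansible_host: 192.168.1.100
      nut_port: 3493
      ups_devices:
        - name: tripplite
          description: "TrippLite SMART1500LCDXL"
```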
Project instructions:
# Windows
copy PROJECT_INSTRUCTIONS.example.md PROJECT_INSTRUCTIONS.md
# Linux/Mac
cp PROJECT_INSTRUCTIONS.example.md PROJECT_INSTRUCTIONS.md
Customize with your network topology and servers.
AI development guide (for contributors):
# Windows
copy CLAUDE.example.md CLAUDE.md
# Linux/Mac
cp CLAUDE.example.md CLAUDE.md
Update with your repository URLs, Notion workspace, and infrastructure details.
4. Install Python dependencies
pip install -r requirements.txt
5. Add to Claude Desktop config
Config file location:
- Windows: %APPDATA%\Claude\claude_desktop_config.json
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Linux: ~/.config/Claude/claude_desktop_config.json
Option A: Unified Server (Recommended)
Single entry for all homelab servers:
{
"mcpServers": {
"homelab-unified": {
"command": "python",
"args": ["C:\\Path\\To\\Homelab-MCP\\homelab_unified_mcp.py"]
},
"mcp-registry-inspector": {
"command": "python",
"args": ["C:\\Path\\To\\Homelab-MCP\\mcp_registry_inspector.py"]
},
"ansible-inventory": {
"command": "python",
"args": ["C:\\Path\\To\\Homelab-MCP\\ansible_mcp_server.py"]
}
}
}
Option B: Individual Servers (Legacy)
Separate entry for each server:
{
"mcpServers": {
"mcp-registry-inspector": {
"command": "python",
"args": ["C:\\Path\\To\\Homelab-MCP\\mcp_registry_inspector.py"]
},
"docker": {
"command": "python",
"args": ["C:\\Path\\To\\Homelab-MCP\\docker_mcp_podman.py"]
},
"ollama": {
"command": "python",
"args": ["C:\\Path\\To\\Homelab-MCP\\ollama_mcp.py"]
},
"pihole": {
"command": "python",
"args": ["C:\\Path\\To\\Homelab-MCP\\pihole_mcp.py"]
},
"unifi": {
"command": "python",
"args": ["C:\\Path\\To\\Homelab-MCP\\unifi_mcp_optimized.py"]
},
"ping": {
"command": "python",
"args": ["C:\\Path\\To\\Homelab-MCP\\ping_mcp_server.py"]
},
"ups-monitor": {
"command": "python",
"args": ["C:\\Path\\To\\Homelab-MCP\\ups_mcp_server.py"]
},
"ansible-inventory": {
"command": "python",
"args": ["C:\\Path\\To\\Homelab-MCP\\ansible_mcp_server.py"]
}
}
}
Note: Tool names differ between modes. See MIGRATION.md for details.
6. Restart Claude Desktop
7. Add project instructions to Claude
- Copy the contents of your customized PROJECT_INSTRUCTIONS.md
- Paste into your Claude project's "Project instructions" field
- This gives Claude comprehensive context about your MCP capabilities
Docker Deployment (Alternative)
Run the MCP servers in Docker containers for easier distribution and isolation.
Quick Start with Docker
Unified Mode (Recommended) - All servers in one container:
# Clone and navigate to repository
git clone https://github.com/bjeans/homelab-mcp
cd homelab-mcp
# Build the image
docker build -t homelab-mcp:latest .
# Run with Docker Compose (recommended)
docker-compose up -d
# Or run unified server directly
docker run -d \
--name homelab-mcp \
--network host \
-v $(pwd)/ansible_hosts.yml:/config/ansible_hosts.yml:ro \
homelab-mcp:latest
Legacy Mode - Individual servers (set ENABLED_SERVERS):
docker run -d \
--name homelab-mcp-docker \
--network host \
-e ENABLED_SERVERS=docker \
-v $(pwd)/ansible_hosts.yml:/config/ansible_hosts.yml:ro \
homelab-mcp:latest
Available Servers
Unified Mode (Default):
- All 5 servers in one process: Docker, Ping, Ollama, Pi-hole, Unifi
- Namespaced tools (e.g., docker_get_containers)
- Single configuration entry
Legacy Mode (Set ENABLED_SERVERS):
- docker - Docker/Podman container management
- ping - Network ping utilities
- ollama - Ollama AI model management
- pihole - Pi-hole DNS statistics
- unifi - Unifi network device monitoring
Docker Configuration
Two configuration methods supported:
- Ansible Inventory (Recommended) - Mount as volume
- Environment Variables - Pass via Docker -e flags
See DOCKER.md for comprehensive Docker deployment guide including:
- Detailed setup instructions
- Network configuration options
- Security best practices
- Claude Desktop integration
- Troubleshooting common issues
Integration with Claude Desktop
Unified Mode (Recommended):
{
"mcpServers": {
"homelab-unified": {
"command": "docker",
"args": ["exec", "-i", "homelab-mcp", "python", "homelab_unified_mcp.py"]
}
}
}
Legacy Mode (Individual Servers):
{
"mcpServers": {
"homelab-docker": {
"command": "docker",
"args": ["exec", "-i", "homelab-mcp-docker", "python", "docker_mcp_podman.py"]
},
"homelab-ping": {
"command": "docker",
"args": ["exec", "-i", "homelab-mcp-ping", "python", "ping_mcp_server.py"]
}
}
}
Important: Use docker exec -i (not -it) for proper MCP stdio communication.
Testing Docker Containers
Quick verification test (using environment variables - marketplace ready):
# Test Ping Server
docker run --rm --network host \
-e ENABLED_SERVERS=ping \
-e PING_TARGET1=8.8.8.8 \
-e PING_TARGET1_NAME=Google-DNS \
homelab-mcp:latest
# Test Docker Server
docker run --rm --network host \
-e ENABLED_SERVERS=docker \
-e DOCKER_SERVER1_ENDPOINT=localhost:2375 \
-e DOCKER_SERVER1_NAME=Local-Docker \
homelab-mcp:latest
Docker Compose testing:
docker-compose up -d
docker-compose logs -f
For comprehensive testing and configuration options, see DOCKER.md - Testing Section.
Available MCP Servers
MCP Registry Inspector
Provides introspection into your MCP development environment.
Tools:
- get_claude_config - View Claude Desktop MCP configuration
- list_mcp_servers - List all registered MCP servers
- list_mcp_directory - Browse MCP development directory
- read_mcp_file - Read MCP server source code
- write_mcp_file - Write/update MCP server files
- search_mcp_files - Search for files by name
Configuration:
MCP_DIRECTORY=/path/to/your/Homelab-MCP
CLAUDE_CONFIG_PATH=/path/to/claude_desktop_config.json # Optional
Docker/Podman Container Manager
Manage Docker and Podman containers across multiple hosts.
Security Warning: Docker/Podman APIs typically use unencrypted HTTP without authentication. See SECURITY.md for required firewall configuration.
Tools:
Individual server mode:
- get_docker_containers - Get containers on a specific host
- get_all_containers - Get all containers across all hosts
- get_container_stats - Get CPU and memory stats
- check_container - Check if a specific container is running
- find_containers_by_label - Find containers by label
- get_container_labels - Get all labels for a container
Unified server mode (namespaced):
- docker_get_containers - Get containers on a specific host
- docker_get_all_containers - Get all containers across all hosts
- docker_get_container_stats - Get CPU and memory stats
- docker_check_container - Check if a specific container is running
- docker_find_containers_by_label - Find containers by label
- docker_get_container_labels - Get all labels for a container
Configuration Options:
Option 1: Using Ansible Inventory (Recommended)
ANSIBLE_INVENTORY_PATH=/path/to/ansible_hosts.yml
Option 2: Using Environment Variables
DOCKER_SERVER1_ENDPOINT=192.168.1.100:2375
DOCKER_SERVER2_ENDPOINT=192.168.1.101:2375
PODMAN_SERVER1_ENDPOINT=192.168.1.102:8080
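As a rough illustration of what these endpoints are used for, the sketch below queries the Docker Engine REST API on one host with aiohttp. It is not the docker_mcp_podman.py implementation, just the underlying kind of call, and the endpoint value is a placeholder:

```python
import asyncio
import aiohttp

async def list_containers(endpoint: str) -> list:
    """Query the Docker Engine API on one host for all containers."""
    url = f"http://{endpoint}/containers/json?all=true"
    async with aiohttp.ClientSession() as session:
        async with session.get(url, timeout=aiohttp.ClientTimeout(total=10)) as resp:
            resp.raise_for_status()
            return await resp.json()

if __name__ == "__main__":
    # Placeholder endpoint; in practice this comes from .env or the Ansible inventory.
    containers = asyncio.run(list_containers("192.168.1.100:2375"))
    for c in containers:
        print(c["Names"], "-", c["State"])
```

Podman's Docker-compatible API accepts the same /containers/json call on its configured port.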
Ollama AI Model Manager
Monitor and manage Ollama AI model instances across your homelab, plus check your LiteLLM proxy for unified API access.
What's Included
Ollama Monitoring:
- Track multiple Ollama instances across different hosts
- View available models and their sizes
- Check instance health and availability
LiteLLM Proxy Integration:
- LiteLLM provides a unified OpenAI-compatible API across all your Ollama instances
- Enables load balancing and failover between multiple Ollama servers
- Allows you to use OpenAI client libraries with your local models
- The MCP server can verify your LiteLLM proxy is online and responding
Why use LiteLLM?
- Load Balancing: Automatically distributes requests across multiple Ollama instances
- Failover: If one Ollama server is down, requests route to healthy servers
- OpenAI Compatibility: Use any OpenAI SDK/library with your local models
- Centralized Access: Single endpoint (e.g., http://192.0.2.10:4000) for all models
- Usage Tracking: Monitor which models are being used most
Tools:
- get_ollama_status - Check status of all Ollama instances and model counts
- get_ollama_models - Get detailed model list for a specific host
- get_litellm_status - Verify LiteLLM proxy is online and responding
Configuration Options:
Option 1: Using Ansible Inventory (Recommended)
ANSIBLE_INVENTORY_PATH=/path/to/ansible_hosts.yml
OLLAMA_PORT=11434 # Default Ollama port
# Ansible inventory group name (default: ollama_servers)
# Change this if you use a different group name in your ansible_hosts.yml
OLLAMA_INVENTORY_GROUP=ollama_servers
# LiteLLM Configuration
LITELLM_HOST=192.168.1.100 # Host running LiteLLM proxy
LITELLM_PORT=4000 # LiteLLM proxy port (default: 4000)
Option 2: Using Environment Variables
# Ollama Instances
OLLAMA_SERVER1=192.168.1.100
OLLAMA_SERVER2=192.168.1.101
OLLAMA_WORKSTATION=192.168.1.150
# LiteLLM Proxy
LITELLM_HOST=192.168.1.100
LITELLM_PORT=4000
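For a sense of what the status checks involve, this sketch queries one Ollama instance's /api/tags endpoint (Ollama's REST API for listing local models). It is illustrative only, not the ollama_mcp.py code, and the host value is a placeholder:

```python
import asyncio
import aiohttp

async def ollama_models(host: str, port: int = 11434) -> list:
    """Return the model names reported by one Ollama instance."""
    url = f"http://{host}:{port}/api/tags"
    async with aiohttp.ClientSession() as session:
        async with session.get(url, timeout=aiohttp.ClientTimeout(total=5)) as resp:
            resp.raise_for_status()
            data = await resp.json()
            return [model["name"] for model in data.get("models", [])]

if __name__ == "__main__":
    print(asyncio.run(ollama_models("192.168.1.100")))  # placeholder host
```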
Setting Up LiteLLM (Optional):
If you want to use LiteLLM for unified access to your Ollama instances:
1. Install LiteLLM on one of your servers:

   pip install litellm[proxy]

2. Create a configuration file (litellm_config.yaml):

   model_list:
     - model_name: llama3.2
       litellm_params:
         model: ollama/llama3.2
         api_base: http://server1:11434
     - model_name: llama3.2
       litellm_params:
         model: ollama/llama3.2
         api_base: http://server2:11434
   router_settings:
     routing_strategy: usage-based-routing

3. Start the LiteLLM proxy:

   litellm --config litellm_config.yaml --port 4000

4. Use the MCP tool to verify it's running:
   - In Claude: "Check my LiteLLM proxy status"
Example Usage:
- "What Ollama instances do I have running?"
- "Show me all models on my Dell-Server"
- "Is my LiteLLM proxy online?"
- "How many models are available across all servers?"
Pi-hole DNS Manager
Monitor Pi-hole DNS statistics and status.
Security Note: Store Pi-hole API keys securely in the .env file. Generate unique keys per instance.
Tools:
- get_pihole_stats - Get DNS statistics from all Pi-hole instances
- get_pihole_status - Check which Pi-hole instances are online
Configuration Options:
Option 1: Using Ansible Inventory (Recommended)
ANSIBLE_INVENTORY_PATH=/path/to/ansible_hosts.yml
# API keys still required in .env:
PIHOLE_API_KEY_SERVER1=your-api-key-here
PIHOLE_API_KEY_SERVER2=your-api-key-here
Option 2: Using Environment Variables
PIHOLE_API_KEY_SERVER1=your-api-key
PIHOLE_API_KEY_SERVER2=your-api-key
PIHOLE_SERVER1_HOST=pihole1.local
PIHOLE_SERVER1_PORT=80
PIHOLE_SERVER2_HOST=pihole2.local
PIHOLE_SERVER2_PORT=8053
Getting Pi-hole API Keys:
- Web UI: Settings → API → Show API Token
- Or generate new: pihole -a -p on the Pi-hole server
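The Troubleshooting section later in this README tests the stats endpoint with curl; as a hedged sketch of the equivalent check in Python (not necessarily how pihole_mcp.py authenticates), the request looks roughly like this:

```python
import asyncio
import aiohttp

async def pihole_summary(host: str, api_key: str, port: int = 80) -> dict:
    """Fetch Pi-hole summary statistics, passing the API key as the sid parameter."""
    url = f"http://{host}:{port}/api/stats/summary"
    async with aiohttp.ClientSession() as session:
        async with session.get(url, params={"sid": api_key},
                               timeout=aiohttp.ClientTimeout(total=5)) as resp:
            resp.raise_for_status()
            return await resp.json()

if __name__ == "__main__":
    # Placeholder host and key; use your own values from .env.
    print(asyncio.run(pihole_summary("pihole1.local", "your-api-key")))
```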
Unifi Network Monitor
Monitor Unifi network infrastructure and clients with caching for performance.
Security Note: Use a dedicated API key with minimal required permissions.
Tools:
- get_network_devices - Get all network devices (switches, APs, gateways)
- get_network_clients - Get all active network clients
- get_network_summary - Get network overview
- refresh_network_data - Force refresh from controller (bypasses cache)
Configuration:
UNIFI_API_KEY=your-unifi-api-key
UNIFI_HOST=192.168.1.1
Note: Data is cached for 5 minutes to improve performance. Use refresh_network_data to force update.
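The caching behaviour can be pictured as a timestamp check wrapped around the controller call. A minimal sketch of that pattern (not the unifi_mcp_optimized.py code itself):

```python
import time
from typing import Any, Callable

class TTLCache:
    """Cache one expensive result and refetch it after a fixed number of seconds."""

    def __init__(self, fetch: Callable[[], Any], ttl_seconds: float = 300.0):
        self._fetch = fetch
        self._ttl = ttl_seconds
        self._value: Any = None
        self._fetched_at: float = 0.0

    def get(self, force_refresh: bool = False) -> Any:
        expired = (time.monotonic() - self._fetched_at) > self._ttl
        if force_refresh or self._value is None or expired:
            self._value = self._fetch()        # e.g. call the Unifi controller API
            self._fetched_at = time.monotonic()
        return self._value

# Conceptually, refresh_network_data maps to cache.get(force_refresh=True).
cache = TTLCache(lambda: {"devices": []}, ttl_seconds=300)
print(cache.get())
```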
Ansible Inventory Inspector
Query Ansible inventory information (read-only).
Tools:
- get_all_hosts - Get all hosts in inventory
- get_all_groups - Get all groups
- get_host_details - Get detailed host information
- get_group_details - Get detailed group information
- get_hosts_by_group - Get hosts in a specific group
- search_hosts - Search hosts by pattern or variable
- get_inventory_summary - High-level inventory overview
- reload_inventory - Reload inventory from disk
Configuration:
ANSIBLE_INVENTORY_PATH=/path/to/ansible_hosts.yml
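Since this server is read-only, its core job is loading the inventory YAML and walking groups and hosts. A rough sketch of that with pyyaml (illustrative; the real server layers validation and richer queries on top):

```python
import os
import yaml

def load_inventory(path: str | None = None) -> dict:
    """Load an Ansible YAML inventory into a plain dict."""
    path = path or os.environ.get("ANSIBLE_INVENTORY_PATH", "ansible_hosts.yml")
    with open(path, "r", encoding="utf-8") as f:
        return yaml.safe_load(f) or {}

def hosts_in_group(inventory: dict, group: str) -> dict:
    """Return the hosts mapping of one top-level group (empty if absent)."""
    return inventory.get(group, {}).get("hosts", {})

if __name__ == "__main__":
    inv = load_inventory()
    print(list(hosts_in_group(inv, "nut_servers")))  # group name from the UPS example below
```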
Ping Network Connectivity Monitor
Test network connectivity and host availability using ICMP ping across your infrastructure.
Why use this?
- Quick health checks during outages or after power events
- Verify which hosts are reachable before querying service-specific MCPs
- Simple troubleshooting tool to identify network issues
- Baseline connectivity testing for your infrastructure
Tools:
- ping_host - Ping a single host by name (resolved from Ansible inventory)
- ping_group - Ping all hosts in an Ansible group concurrently
- ping_all - Ping all infrastructure hosts concurrently
- list_groups - List available Ansible groups for ping operations
Features:
- Cross-platform support - Works on Windows, Linux, and macOS
- Ansible integration - Automatically resolves hostnames/IPs from inventory
- Concurrent pings - Test multiple hosts simultaneously for faster results
- Detailed statistics - RTT min/avg/max, packet loss percentage
- Customizable - Configure timeout and packet count
- No dependencies - Uses the system ping command (no extra libraries needed)
Configuration:
ANSIBLE_INVENTORY_PATH=/path/to/ansible_hosts.yml
# No additional API keys required!
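Conceptually, a cross-platform ping check just shells out to the system ping binary with the right count flag and inspects the exit code. A minimal asyncio sketch of the idea (not the ping_mcp_server.py implementation):

```python
import asyncio
import platform

async def ping(host: str, count: int = 3, timeout: float = 10.0) -> bool:
    """Return True if the host answers ICMP ping via the system ping command."""
    count_flag = "-n" if platform.system() == "Windows" else "-c"
    proc = await asyncio.create_subprocess_exec(
        "ping", count_flag, str(count), host,
        stdout=asyncio.subprocess.DEVNULL,
        stderr=asyncio.subprocess.DEVNULL,
    )
    try:
        return await asyncio.wait_for(proc.wait(), timeout) == 0
    except asyncio.TimeoutError:
        proc.kill()
        await proc.wait()
        return False

async def main():
    hosts = ["192.168.1.1", "192.168.1.100"]  # placeholder addresses
    results = await asyncio.gather(*(ping(h) for h in hosts))  # concurrent, like ping_group
    for host, ok in zip(hosts, results):
        print(host, "reachable" if ok else "unreachable")

if __name__ == "__main__":
    asyncio.run(main())
```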
Example Usage:
- "Ping server1.example.local"
- "Check connectivity to all Pi-hole servers"
- "Ping all Ubuntu_Server hosts"
- "Test connectivity to entire infrastructure"
- "What groups can I ping?"
When to use:
- After power outages - Quickly identify which hosts came back online
- Before service checks - Verify host is reachable before checking specific services
- Network troubleshooting - Isolate connectivity issues from service issues
- Health monitoring - Regular checks to ensure infrastructure availability
UPS Monitoring (Network UPS Tools)
Monitor UPS (Uninterruptible Power Supply) devices across your infrastructure using Network UPS Tools (NUT) protocol.
Why use this?
- Real-time visibility into power infrastructure status
- Proactive alerts before battery depletion during outages
- Monitor multiple UPS devices across different hosts
- Track battery health and runtime estimates
- Essential for critical infrastructure planning
Tools:
- get_ups_status - Check status of all UPS devices across all NUT servers
- get_ups_details - Get detailed information for a specific UPS device
- get_battery_runtime - Get battery runtime estimates for all UPS devices
- get_power_events - Check for recent power events (on battery, low battery)
- list_ups_devices - List all UPS devices configured in inventory
- reload_inventory - Reload Ansible inventory after changes
Features:
- NUT protocol support - Uses Network UPS Tools standard protocol (port 3493)
- Ansible integration - Automatically discovers UPS devices from inventory
- Multiple UPS per host - Support for servers with multiple UPS devices
- Battery monitoring - Track charge level, runtime remaining, load percentage
- Power event detection - Identify when a UPS switches to battery or reports low battery
- Cross-platform - Works with any NUT-compatible UPS (TrippLite, APC, CyberPower, etc.)
- Flexible auth - Optional username/password authentication
Configuration:
Option 1: Using Ansible Inventory (Recommended)
ANSIBLE_INVENTORY_PATH=/path/to/ansible_hosts.yml
# Default NUT port (optional, defaults to 3493)
NUT_PORT=3493
# NUT authentication (optional - only if your NUT server requires it)
NUT_USERNAME=monuser
NUT_PASSWORD=secret
Ansible inventory example:
nut_servers:
hosts:
dell-server.example.local:
ansible_host: 192.168.1.100
nut_port: 3493
ups_devices:
- name: tripplite
description: "TrippLite SMART1500LCDXL"
Option 2: Using Environment Variables
NUT_PORT=3493
NUT_USERNAME=monuser
NUT_PASSWORD=secret
Prerequisites:
1. Install NUT on servers with UPS devices:

   # Debian/Ubuntu
   sudo apt install nut nut-client nut-server

   # RHEL/Rocky/CentOS
   sudo dnf install nut nut-client

2. Configure the NUT daemon (/etc/nut/ups.conf):

   [tripplite]
     driver = usbhid-ups
     port = auto
     desc = "TrippLite SMART1500LCDXL"

3. Enable network monitoring (/etc/nut/upsd.conf):

   LISTEN 0.0.0.0 3493

4. Configure access (/etc/nut/upsd.users):

   [monuser]
     password = secret
     upsmon master

5. Start NUT services:

   sudo systemctl enable nut-server nut-client
   sudo systemctl start nut-server nut-client
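Once NUT is listening, the MCP server only needs the NUT network protocol, a simple line-based text protocol on port 3493 (commands such as LIST UPS and GET VAR). A minimal sketch of reading one variable over a raw socket (illustrative only; ups_mcp_server.py may structure its client differently):

```python
import asyncio

async def nut_get_var(host: str, ups: str, var: str, port: int = 3493) -> str:
    """Read a single variable (e.g. battery.charge) from a NUT server."""
    reader, writer = await asyncio.open_connection(host, port)
    try:
        writer.write(f"GET VAR {ups} {var}\n".encode())
        await writer.drain()
        reply = (await reader.readline()).decode().strip()
        # Expected reply: VAR <ups> <var> "<value>"
        if reply.startswith("VAR"):
            return reply.split('"')[1]
        raise RuntimeError(f"Unexpected NUT reply: {reply}")
    finally:
        writer.close()
        await writer.wait_closed()

if __name__ == "__main__":
    # Placeholder host; the UPS name matches the ups.conf example above.
    charge = asyncio.run(nut_get_var("192.168.1.100", "tripplite", "battery.charge"))
    print(f"Battery charge: {charge}%")
```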
Example Usage:
- "What's the status of all my UPS devices?"
- "Show me battery runtime for the Dell server UPS"
- "Check for any power events"
- "Get detailed info about the TrippLite UPS"
- "List all configured UPS devices"
When to use:
- After power flickers - Verify UPS devices handled the event properly
- Before maintenance - Check battery levels and estimated runtime
- Regular monitoring - Track UPS health and battery condition
- Capacity planning - Understand how long systems can run on battery
Common UPS Status Codes:
- OL - Online (normal operation, AC power present)
- OB - On Battery (power outage, running on battery)
- LB - Low Battery (critically low battery, shutdown imminent)
- CHRG - Charging (battery is charging)
- RB - Replace Battery (battery needs replacement)
Security
Automated Security Checks
This project includes automated security validation to prevent accidental exposure of sensitive data:
Install the pre-push git hook (recommended):
# From project root
python helpers/install_git_hook.py
What it does:
- Automatically runs helpers/pre_publish_check.py before every git push
- Blocks pushes that contain potential secrets or sensitive data
- Protects against accidentally committing API keys, passwords, or personal information
Manual security check:
# Run security validation manually
python helpers/pre_publish_check.py
Bypass security check (use with extreme caution):
# Only when absolutely necessary
git push --no-verify
Critical Security Practices
Configuration Files:
- DO use .env.example as a template
- DO keep .env file permissions restrictive (chmod 600 on Linux/Mac)
- NEVER commit .env to version control
- NEVER commit ansible_hosts.yml with real infrastructure
- NEVER commit PROJECT_INSTRUCTIONS.md with real network topology
API Security:
- DO use unique API keys for each service
- DO rotate API keys regularly (every 90 days recommended)
- DO use strong, randomly-generated keys (32+ characters)
- NEVER expose Docker/Podman APIs to the internet
- NEVER reuse API keys between environments
Network Security:
- DO use firewall rules to restrict API access
- DO implement VLAN segmentation
- DO enable TLS/HTTPS where possible
- NEVER expose management interfaces publicly
For detailed security guidance, see SECURITY.md
Requirements
System Requirements
- Python: 3.10 or higher
- Claude Desktop: Latest version recommended
- Network Access: Connectivity to homelab services
Python Dependencies
Install via requirements.txt:
pip install -r requirements.txt
Core dependencies:
- mcp - Model Context Protocol SDK
- aiohttp - Async HTTP client
- pyyaml - YAML parsing for Ansible inventory
Service Requirements
- Docker/Podman: API enabled on monitored hosts
- Pi-hole: v6+ with API enabled
- Unifi Controller: API access enabled
- Ollama: Running instances with API accessible
- NUT (Network UPS Tools): Installed and configured on hosts with UPS devices
- Ansible: Inventory file (optional but recommended)
Compatibility
Tested Platforms
Developed and tested on:
- OS: Windows 11
- Claude Desktop: Version 0.13.64
- Python: Version 3.13.8
Cross-Platform Notes
- Windows: Fully tested and supported
- macOS: Should work but untested
- Linux: Should work but untested
Known platform differences:
- File paths in documentation are Windows-style
- Path separators may need adjustment for Unix systems
- .env file permissions should be set on Unix (chmod 600 .env)
Contributions for other platforms welcome!
Development
First time contributing? Read CLAUDE.md for complete development guidance including architecture patterns, security requirements, and AI assistant workflows.
Getting Started
1. Install security git hook (required for contributors):

   python helpers/install_git_hook.py

2. Set up development environment:

   pip install -r requirements.txt
   cp .env.example .env
   # Edit .env with your test values
Testing MCP Servers Locally
Before submitting a PR, test your MCP server changes locally using the MCP Inspector tool.
Quick start:
# Install MCP Inspector (one time)
npm install -g @modelcontextprotocol/inspector
# Test your changes
npx @modelcontextprotocol/inspector uv --directory . run <server>_mcp.py
This opens a web-based debugger at http://localhost:5173 where you can:
- See all available tools for the MCP server
- Test each tool with sample arguments
- Verify responses are properly formatted
- Debug issues before submitting PRs
For detailed testing instructions, see the Testing MCP Servers Locally section in CONTRIBUTING.md.
Helper Scripts
The helpers/ directory contains utility scripts for development and deployment:
- install_git_hook.py - Installs git pre-push hook for automatic security checks
- pre_publish_check.py - Security validation script (runs automatically via git hook)
Usage:
# Install security git hook
python helpers/install_git_hook.py
# Run security check manually
python helpers/pre_publish_check.py
Project Structure
Homelab-MCP/
├── helpers/                          # Utility and setup scripts
│   ├── install_git_hook.py           # Git pre-push hook installer
│   └── pre_publish_check.py          # Security validation script
├── .env.example                      # Template for environment variables
├── .gitignore                        # Excludes sensitive files
├── SECURITY.md                       # Security best practices
├── README.md                         # This file
├── CLAUDE.example.md                 # Example AI assistant guide (copy to CLAUDE.md)
├── CONTRIBUTING.md                   # Contribution guidelines
├── CHANGELOG.md                      # Version history
├── requirements.txt                  # Python dependencies
├── ansible_hosts.example.yml         # Example Ansible inventory
├── PROJECT_INSTRUCTIONS.example.md   # Example Claude instructions
├── ansible_mcp_server.py             # Ansible inventory MCP
├── docker_mcp_podman.py              # Docker/Podman MCP
├── ollama_mcp.py                     # Ollama AI MCP
├── pihole_mcp.py                     # Pi-hole DNS MCP
├── unifi_mcp_optimized.py            # Unifi network MCP
├── unifi_exporter.py                 # Unifi data export utility
└── mcp_registry_inspector.py         # MCP development tools
Adding a New MCP Server
1. Create the server file

   #!/usr/bin/env python3
   """
   My Service MCP Server
   Description of what it does
   """
   import asyncio
   from mcp.server import Server
   # ... implement tools ...

2. Add configuration to .env.example

   # My Service Configuration
   MY_SERVICE_HOST=192.168.1.100
   MY_SERVICE_API_KEY=your-api-key

3. Update documentation
   - Add server details to this README
   - Update PROJECT_INSTRUCTIONS.example.md
   - Update CLAUDE.md if adding new patterns or capabilities
   - Add security notes if applicable
4. Test thoroughly
- Test with real infrastructure
- Verify error handling
- Check for sensitive data leaks
- Review security implications
Environment Variables
All MCP servers support two configuration methods:
1. Environment Variables (.env file)
- Simple key=value pairs
- Loaded automatically by each MCP server
- Good for simple setups or testing
2. Ansible Inventory (recommended for production)
- Centralized infrastructure definition
- Supports complex host groupings
- Better for multi-host environments
- Set ANSIBLE_INVENTORY_PATH in .env
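The usual precedence is: if ANSIBLE_INVENTORY_PATH points at a readable inventory, use it; otherwise fall back to the per-service environment variables. A sketch of that pattern for the Docker endpoints (illustrative of the approach, not any one server's exact code; the docker_hosts group name is a placeholder):

```python
import os
import yaml

def docker_endpoints() -> list:
    """Resolve Docker API endpoints from the Ansible inventory or DOCKER_SERVER*_ENDPOINT vars."""
    inventory_path = os.environ.get("ANSIBLE_INVENTORY_PATH")
    if inventory_path and os.path.exists(inventory_path):
        with open(inventory_path, "r", encoding="utf-8") as f:
            inventory = yaml.safe_load(f) or {}
        hosts = inventory.get("docker_hosts", {}).get("hosts", {})  # placeholder group name
        return [f"{(hostvars or {}).get('ansible_host', name)}:2375"
                for name, hostvars in hosts.items()]
    # Fallback: DOCKER_SERVER1_ENDPOINT, DOCKER_SERVER2_ENDPOINT, ...
    return [value for key, value in sorted(os.environ.items())
            if key.startswith("DOCKER_SERVER") and key.endswith("_ENDPOINT")]

if __name__ == "__main__":
    print(docker_endpoints())
```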
Coding Standards
- Python 3.10+ syntax and features
- Async/await for all I/O operations
- Type hints where beneficial
- Error handling for network operations
- Logging to stderr for debugging
- Security: Validate inputs, sanitize outputs
Testing Checklist
Before committing changes:
- Security git hook installed (python helpers/install_git_hook.py)
- Manual security check passes (python helpers/pre_publish_check.py)
- No sensitive data in code or commits
- Environment variables for all configuration
- Error handling for network failures
- Logging doesn't expose secrets
- Documentation updated
- Security implications reviewed
- .gitignore updated if needed
Troubleshooting
MCP Servers Not Appearing in Claude
1. Check Claude Desktop config:

   # Windows
   type %APPDATA%\Claude\claude_desktop_config.json

   # Mac/Linux
   cat ~/.config/Claude/claude_desktop_config.json

2. Verify Python path is correct in config
3. Restart Claude Desktop completely
4. Check logs - MCP servers log to stderr
Connection Errors
Docker/Podman API:
# Test connectivity
curl http://your-host:2375/containers/json
# Check firewall
netstat -an | grep 2375
Pi-hole API:
# Test API key
curl "http://your-pihole/api/stats/summary?sid=YOUR_API_KEY"
Ollama:
# Test Ollama endpoint
curl http://your-host:11434/api/tags
Import Errors
If you get Python import errors:
# Reinstall dependencies
pip install --upgrade -r requirements.txt
# Verify MCP installation
pip show mcp
Permission Errors
On Linux/Mac:
# Fix .env permissions
chmod 600 .env
# Make scripts executable
chmod +x *.py
Additional Resources
MCP Protocol
Related Projects
License
MIT License - See LICENSE file for details
Copyright (c) 2025 Barnaby Jeans
Contributing
Contributions are welcome! Please see CONTRIBUTING.md for detailed guidelines.
For AI Assistants & Developers
Read CLAUDE.md first - This file contains:
- Complete project architecture and development patterns
- Security requirements and common pitfalls to avoid
- Specific workflows for adding features and fixing bugs
- AI assistant-specific guidance for working with this codebase
Quick Start for Contributors
- Install security git hook (python helpers/install_git_hook.py)
- Review security guidelines in SECURITY.md
- No sensitive data in commits (hook will block automatically)
- All configuration uses environment variables or Ansible
- Update documentation for any changes
- Test thoroughly with real infrastructure
Pull Request Process
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Make your changes
- Test with your homelab setup
- Update README and other docs as needed
- Commit with clear messages (git commit -m 'Add amazing feature')
- Push to your fork (git push origin feature/amazing-feature)
- Open a Pull Request
Code Review Criteria
- Security best practices followed
- No hardcoded credentials or IPs
- Proper error handling
- Code follows existing patterns
- Documentation is clear and complete
- Changes are tested
Acknowledgments
- Anthropic for Claude and MCP
- The homelab community for inspiration
- Contributors and testers
Support
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Security: See SECURITY.md for reporting vulnerabilities
Remember: This project handles critical infrastructure. Always prioritize security and test changes in a safe environment first!