SensorMCP Server
Automate dataset creation and train custom object detection models using natural language.
SensorMCP is a Model Context Protocol (MCP) server that enables automated dataset creation and custom object detection model training through natural language interactions. It integrates computer vision capabilities with Large Language Models using the MCP standard.
About
SensorMCP Server combines the power of foundation models (like GroundedSAM) with custom model training (YOLOv8) to create a seamless workflow for object detection. Using the Model Context Protocol, it enables LLMs to:
- Automatically label images using foundation models
- Create custom object detection datasets
- Train specialized detection models
- Download images from Unsplash for training data
> [!NOTE]
> The Model Context Protocol (MCP) enables seamless integration between LLMs and external tools, making this ideal for AI-powered computer vision workflows.
Features
- Foundation Model Integration: Uses GroundedSAM for automatic image labeling
- Custom Model Training: Fine-tune YOLOv8 models on your specific objects
- Image Data Management: Download images from Unsplash or import local images
- Ontology Definition: Define custom object classes through natural language
- MCP Protocol: Native integration with LLM workflows and chat interfaces
- Fixed Data Structure: Organized directory layout for reproducible workflows
Installation
Prerequisites
- uv for package management
- Python 3.13+ (uv python install 3.13)
- CUDA-compatible GPU (recommended for training)
Setup
- Clone the repository:
git clone <repository-url>
cd sensor-mcp
- Install dependencies:
uv sync
- Set up environment variables (create a .env file):
UNSPLASH_API_KEY=your_unsplash_api_key_here
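At runtime the server reads this key from the environment. A minimal sketch of how such a lookup might work, using only the standard library (not the project's actual loading code):

import os

# Read the Unsplash access key from the environment; without it,
# image downloads from Unsplash cannot work.
UNSPLASH_API_KEY = os.getenv("UNSPLASH_API_KEY")
if not UNSPLASH_API_KEY:
    raise RuntimeError("UNSPLASH_API_KEY is not set; add it to your .env file")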
Usage
Running the MCP Server
For MCP integration (recommended):
uv run src/zoo_mcp.py
For standalone web server:
uv run src/server.py
MCP Configuration
Add to your MCP client configuration:
{
    "mcpServers": {
        "sensormcp-server": {
            "type": "stdio",
            "command": "uv",
            "args": [
                "--directory",
                "/path/to/sensor-mcp",
                "run",
                "src/zoo_mcp.py"
            ]
        }
    }
}
Available MCP Tools
- list_available_models() - View supported base and target models
- define_ontology(objects_list) - Define object classes to detect
- set_base_model(model_name) - Initialize foundation model for labeling
- set_target_model(model_name) - Initialize target model for training
- fetch_unsplash_images(query, max_images) - Download training images
- import_images_from_folder(folder_path) - Import local images
- label_images() - Auto-label images using the base model
- train_model(epochs, device) - Train custom detection model
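For orientation, tools like these are typically exposed through the FastMCP helper in the mcp Python SDK. The snippet below is an illustrative sketch with a hypothetical define_ontology implementation, not the server's actual code:

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("SensorMCP")

@mcp.tool()
def define_ontology(objects_list: str) -> str:
    """Define the object classes to detect, e.g. 'tiger, elephant, zebra'."""
    classes = [c.strip() for c in objects_list.split(",") if c.strip()]
    # ... persist the ontology in application state here ...
    return f"Ontology set to: {classes}"

if __name__ == "__main__":
    mcp.run(transport="stdio")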
Example Workflow
Through your MCP-enabled LLM interface:
- Define what to detect: Define ontology for "tiger, elephant, zebra"
- Set up models: Set base model to grounded_sam, then set target model to yolov8n.pt
- Get training data: Fetch 50 images from Unsplash for "wildlife animals"
- Create dataset: Label all images using the base model
- Train custom model: Train model for 100 epochs on device 0
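Under the hood, this workflow corresponds roughly to the following autodistill and ultralytics calls. This is a rough sketch: the class names, folder paths (taken from the project structure below), and data.yaml location are assumptions, not the server's exact implementation:

from autodistill.detection import CaptionOntology
from autodistill_grounded_sam import GroundedSAM
from ultralytics import YOLO

# 1. Define the ontology: caption prompt -> class name
ontology = CaptionOntology({"tiger": "tiger", "elephant": "elephant", "zebra": "zebra"})

# 2. Auto-label the raw images with the GroundedSAM base model
base_model = GroundedSAM(ontology=ontology)
base_model.label(input_folder="src/data/raw_images", output_folder="src/data/labeled_images")

# 3. Fine-tune a YOLOv8 target model on the auto-labeled dataset
target_model = YOLO("yolov8n.pt")
target_model.train(data="src/data/labeled_images/data.yaml", epochs=100, device=0)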
Project Structure
sensor-mcp/
├── src/
│   ├── server.py          # Main MCP server implementation
│   ├── zoo_mcp.py         # MCP entry point
│   ├── models.py          # Model management and training
│   ├── image_utils.py     # Image processing and Unsplash API
│   ├── state.py           # Application state management
│   └── data/              # Created automatically
│       ├── raw_images/    # Original/unlabeled images
│       ├── labeled_images/ # Auto-labeled datasets
│       └── models/        # Trained model weights
├── static/                # Web interface assets
└── index.html             # Web interface template
Supported Models
Base Models (for auto-labeling)
- GroundedSAM: Foundation model for object detection and segmentation
Target Models (for training)
- YOLOv8n.pt: Nano - fastest inference
- YOLOv8s.pt: Small - balanced speed/accuracy
- YOLOv8m.pt: Medium - higher accuracy
- YOLOv8l.pt: Large - high accuracy
- YOLOv8x.pt: Extra Large - highest accuracy
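Once training finishes, the resulting weights (stored under the data/models/ directory in this layout) load like any other YOLOv8 checkpoint. A minimal inference example with an illustrative weights path:

from ultralytics import YOLO

# Load the custom-trained weights (path is illustrative)
model = YOLO("src/data/models/best.pt")

# Run detection on a new image and print class names with confidences
results = model("wildlife_photo.jpg")
for box in results[0].boxes:
    print(model.names[int(box.cls)], float(box.conf))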
API Integration
Unsplash API
To use image download functionality:
- Create an account at Unsplash Developers
- Create a new application
- Add your access key to the .env file
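For reference, the underlying Unsplash search endpoint looks roughly like this. The snippet is a sketch of a direct API call (assuming the requests library is available), not the server's internal image_utils.py code:

import os
import requests

def search_unsplash(query: str, per_page: int = 10) -> list[str]:
    """Query the Unsplash photo search API and return image URLs."""
    resp = requests.get(
        "https://api.unsplash.com/search/photos",
        params={"query": query, "per_page": per_page},
        headers={"Authorization": f"Client-ID {os.environ['UNSPLASH_API_KEY']}"},
        timeout=30,
    )
    resp.raise_for_status()
    return [photo["urls"]["regular"] for photo in resp.json()["results"]]

print(search_unsplash("wildlife animals", per_page=5))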
Development
Running Tests
uv run pytest
Code Formatting
uv run black src/
Requirements
See pyproject.toml for full dependency list. Key dependencies:
- mcp[cli] - Model Context Protocol
- autodistill - Foundation model integration
- torch & torchvision - Deep learning framework
- ultralytics - YOLOv8 implementation
Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for new functionality
- Submit a pull request
Citation
If you use this code or data in your research, please cite our paper:
@inproceedings{Guo2025,
  author = {Guo, Yunqi and Zhu, Guanyu and Liu, Kaiwei and Xing, Guoliang},
  title = {A Model Context Protocol Server for Custom Sensor Tool Creation},
  booktitle = {3rd International Workshop on Networked AI Systems (NetAISys '25)},
  year = {2025},
  month = {jun},
  address = {Anaheim, CA, USA},
  publisher = {ACM},
  doi = {10.1145/3711875.3736687},
  isbn = {979-8-4007-1453-5/25/06}
}
License
This project is licensed under the MIT License.
Contact
For questions about the zoo dataset mentioned in development, email: yq@anysign.net