ComputerVision-based 🪄 sorcery of image recognition and editing tools for AI assistants.
AI assistants are limited when working with images out of the box. 🪄 ImageSorcery empowers them with powerful image processing capabilities.
Just ask your AI to help with image tasks:

- "Copy photos with pets from the folder `photos` to the folder `pets`"
- "Find a cat in `photo.jpg` and crop the image to half its height and width so that the cat is centered"
  💡 Hint: Use the full path to your files.
- "Numerate form fields on this `form.jpg` with the `foduucom/web-form-ui-field-detection` model and fill `form.md` with a list of the described fields"
  💡 Hint: Specify the model and the confidence.

💡 Hint: Add "use imagesorcery" to make sure the proper tool is used.
Your AI assistant will combine the tools listed below to achieve your goal.
Tool | Description | Example Prompt |
---|---|---|
blur | Blurs specified areas of an image using OpenCV | "Blur the area from (150, 100) to (250, 200) with a blur strength of 21 in my image 'test_image.png' and save it as 'output.png'" |
crop | Crops an image using OpenCV's NumPy slicing approach | "Crop my image 'input.png' from coordinates (10,10) to (200,200) and save it as 'cropped.png'" |
detect | Detects objects in an image using models from Ultralytics | "Detect objects in my image 'photo.jpg' with a confidence threshold of 0.4" |
draw_rectangles | Draws rectangles on an image using OpenCV | "Draw a red rectangle from (50,50) to (150,100) and a filled blue rectangle from (200,150) to (300,250) on my image 'photo.jpg'" |
draw_texts | Draws text on an image using OpenCV | "Add text 'Hello World' at position (50,50) and 'Copyright 2023' at the bottom right corner of my image 'photo.jpg'" |
find | Finds objects in an image based on a text description | "Find all dogs in my image 'photo.jpg' with a confidence threshold of 0.4" |
get_metainfo | Gets metadata information about an image file | "Get metadata information about my image 'photo.jpg'" |
get_models | Lists all available models in the models directory | "List all available models in the models directory" |
ocr | Performs Optical Character Recognition (OCR) on an image using EasyOCR | "Extract text from my image 'document.jpg' using OCR with English language" |
resize | Resizes an image using OpenCV | "Resize my image 'photo.jpg' to 800x600 pixels and save it as 'resized_photo.jpg'" |
rotate | Rotates an image using imutils.rotate_bound function | "Rotate my image 'photo.jpg' by 45 degrees and save it as 'rotated_photo.jpg'" |
💡 Hint: Detailed information and usage instructions for each tool can be found in `src/imagesorcery_mcp/tools/README.md`.
Requirements:
- Python 3.10 or higher
- Claude.app, Cline, or another MCP client

1. Load MCP documentation if you have the tool for it. It will give you the path to the *MCP-configuration file* and other needed instructions.
2. Run command `python -m venv imagesorcery-mcp`
3. Run command `source imagesorcery-mcp/bin/activate`
For windows bash run command `source imagesorcery-mcp/Scripts/activate`
For windows cmd run command `imagesorcery-mcp\Scripts\activate`
4. Run command `pip install imagesorcery-mcp`
5. Run command `imagesorcery-mcp --post-install`
6. Create or modify the *MCP-configuration file*.
7. Add the `imagesorcery-mcp` server configuration to the `mcpServers` object in the *MCP-configuration file*:
`"imagesorcery-mcp": {"command": "/full/path/to/venv/bin/imagesorcery-mcp","timeout": 100}`
8. Get available models using `get_models` tool from `imagesorcery-mcp`
9. Attempt completion, indicating that the installation and configuration are complete.
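For reference, steps 2-5 map to the following commands on Linux/macOS (a consolidated sketch of the steps above; use the Windows activation command from step 3 where applicable):

# Create the venv, activate it, install the package, and fetch models
python -m venv imagesorcery-mcp
source imagesorcery-mcp/bin/activate
pip install imagesorcery-mcp
imagesorcery-mcp --post-install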
Create and activate a virtual environment (Strongly Recommended):
For reliable installation of all components, especially the `clip` package (installed via the post-install script), it is strongly recommended to use Python's built-in `venv` module instead of `uv venv`.
python -m venv imagesorcery-mcp
source imagesorcery-mcp/bin/activate # For Linux/macOS
# source imagesorcery-mcp\Scripts\activate # For Windows
Install the package into the activated virtual environment:
You can use `pip` or `uv pip`.
pip install imagesorcery-mcp
# OR, if you prefer using uv for installation into the venv:
# uv pip install imagesorcery-mcp
Run the post-installation script:
This step is crucial. It downloads the required models and attempts to install the `clip` Python package from GitHub into the active virtual environment.
imagesorcery-mcp --post-install
This script:
- Creates a `models` directory (usually within the site-packages directory of your virtual environment, or a user-specific location if installed globally) to store pre-trained models.
- Generates a `models/model_descriptions.json` file there.
- Downloads default YOLO models (`yoloe-11l-seg-pf.pt`, `yoloe-11s-seg-pf.pt`, `yoloe-11l-seg.pt`, `yoloe-11s-seg.pt`) required by the `detect` tool into this `models` directory.
- Installs the `clip` Python package from Ultralytics' GitHub repository directly into the active Python environment. This is required for text prompt functionality in the `find` tool.
- Downloads the CLIP model required by the `find` tool into the `models` directory.

You can run this process anytime to restore the default models and attempt `clip` installation.
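A quick sanity check after the post-install step (a sketch; it assumes the virtual environment is named `imagesorcery-mcp` as in the steps above and that the `clip` package is importable under that name; if models were placed in a user-specific location, search there instead):

python -c "import clip; print('clip is available')"    # fails if clip was not installed
find imagesorcery-mcp -name "model_descriptions.json"  # locate the generated descriptions file
find imagesorcery-mcp -name "*.pt"                     # list the downloaded model weights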
Using `uv venv` to create virtual environments:
Based on testing, virtual environments created with `uv venv` may not include `pip` in a way that allows the `imagesorcery-mcp --post-install` script to automatically install the `clip` package from GitHub (it might result in a "No module named pip" error during the `clip` installation step).
If you choose to use `uv venv`:
- Create the environment: `uv venv`.
- Install `imagesorcery-mcp`: `uv pip install imagesorcery-mcp`.
- Manually install the `clip` package into your active `uv venv`:
  `uv pip install git+https://github.com/ultralytics/CLIP.git`
- Then run `imagesorcery-mcp --post-install`. This will download models but may fail to install the `clip` Python package.
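Put together, the `uv`-based workaround from the list above looks roughly like this (a sketch; `.venv` as the default environment directory is an assumption here):

uv venv                                                       # create the environment (.venv by default)
source .venv/bin/activate                                     # activate it (Linux/macOS)
uv pip install imagesorcery-mcp                               # install the server
uv pip install git+https://github.com/ultralytics/CLIP.git   # install clip manually
imagesorcery-mcp --post-install                               # downloads models; the clip step may fail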
For a smoother automated `clip` installation via the post-install script, using `python -m venv` (as described in step 1 above) is the recommended method for creating the virtual environment.

Using `uvx imagesorcery-mcp --post-install`:
Running the post-installation script directly with `uvx` (e.g., `uvx imagesorcery-mcp --post-install`) will likely fail to install the `clip` Python package. This is because the temporary environment created by `uvx` typically does not have `pip` available in a way the script can use. Models will be downloaded, but the `clip` package won't be installed by this command.
If you intend to use `uvx` to run the main `imagesorcery-mcp` server and require `clip` functionality, you'll need to ensure the `clip` package is installed in an accessible Python environment that `uvx` can find, or consider installing `imagesorcery-mcp` into a persistent environment created with `python -m venv`.
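If you do run the server itself through `uvx`, the invocation is simply the command below; keep the `clip` caveat above in mind, since the `find` tool may be unavailable unless `clip` is visible to that environment:

uvx imagesorcery-mcp   # runs the server in a temporary uvx-managed environment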
Add these settings to your MCP client.
If `imagesorcery-mcp` is in your system's PATH after installation, you can use `imagesorcery-mcp` directly as the command. Otherwise, you'll need to provide the full path to the executable.
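If you are unsure of the full path to the executable, a standard shell lookup can help (a sketch):

which imagesorcery-mcp    # Linux/macOS
# where imagesorcery-mcp  # Windows (cmd)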
"mcpServers": {
"imagesorcery-mcp": {
"command": "imagesorcery-mcp", // Or /full/path/to/venv/bin/imagesorcery-mcp if installed in a venv
"transportType": "stdio",
"autoApprove": ["detect", "crop", "get_models", "draw_texts", "get_metainfo", "rotate", "resize", "classify", "draw_rectangles", "find", "ocr"],
"timeout": 100
}
}
"mcpServers": {
"imagesorcery-mcp": {
"url": "http://127.0.0.1:8000/mcp", // Use your custom host, port, and path if specified
"transportType": "http",
"autoApprove": ["detect", "crop", "get_models", "draw_texts", "get_metainfo", "rotate", "resize", "classify", "draw_rectangles", "find", "ocr"],
"timeout": 100
}
}
"mcpServers": {
"imagesorcery-mcp": {
"command": "imagesorcery-mcp.exe", // Or C:\\full\\path\\to\\venv\\Scripts\\imagesorcery-mcp.exe if installed in a venv
"transportType": "stdio",
"autoApprove": ["detect", "crop", "get_models", "draw_texts", "get_metainfo", "rotate", "resize", "classify", "draw_rectangles", "find", "ocr"],
"timeout": 100
}
}
Some tools require specific models to be available in the `models` directory:
# Download models for the detect tool
download-yolo-models --ultralytics yoloe-11l-seg
download-yolo-models --huggingface ultralytics/yolov8:yolov8m.pt
When downloading models, the script automatically updates the `models/model_descriptions.json` file:
- For Ultralytics models: descriptions are predefined in `src/imagesorcery_mcp/scripts/create_model_descriptions.py` and include detailed information about each model's purpose, size, and characteristics.
- For Hugging Face models: descriptions are automatically extracted from the model card on Hugging Face Hub. The script attempts to use the model name from the model index or the first line of the description.
After downloading models, it's recommended to check the descriptions in `models/model_descriptions.json` and adjust them if needed to provide more accurate or detailed information about the models' capabilities and use cases.
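For example, you can pretty-print the file before editing it (a sketch, assuming you run it from the directory that contains `models/`):

python -m json.tool models/model_descriptions.json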
ImageSorcery MCP server can be run in different modes:
- STDIO - default
- Streamable HTTP - for web-based deployments
- Server-Sent Events (SSE) - for web-based deployments that rely on SSE

STDIO Mode (Default) - This is the standard mode for local MCP clients:
imagesorcery-mcp
Streamable HTTP Mode - For web-based deployments:
imagesorcery-mcp --transport=streamable-http
With custom host, port, and path:
imagesorcery-mcp --transport=streamable-http --host=0.0.0.0 --port=4200 --path=/custom-path
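By analogy, SSE mode (one of the values accepted by `--transport`, listed below) would be started with:

imagesorcery-mcp --transport=sse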
Available transport options:
- `--transport`: Choose between "stdio" (default), "streamable-http", or "sse"
- `--host`: Specify the host for HTTP-based transports (default: 127.0.0.1)
- `--port`: Specify the port for HTTP-based transports (default: 8000)
- `--path`: Specify the endpoint path for HTTP-based transports (default: /mcp)

This repository is organized as follows:
.
├── .gitignore # Specifies intentionally untracked files that Git should ignore.
├── pyproject.toml # Configuration file for Python projects, including build system, dependencies, and tool settings.
├── pytest.ini # Configuration file for the pytest testing framework.
├── README.md # The main documentation file for the project.
├── setup.sh # A shell script for quick setup (legacy, for reference or local use).
├── models/ # This directory stores pre-trained models used by tools like `detect` and `find`. It is typically ignored by Git due to the large file sizes.
│   ├── model_descriptions.json # Contains descriptions of the available models.
│   ├── settings.json # Contains settings related to model management and training runs.
│   └── *.pt # Pre-trained model.
├── src/ # Contains the source code for the 🪄 ImageSorcery MCP server.
│   └── imagesorcery_mcp/ # The main package directory for the server.
│       ├── __init__.py # Makes `imagesorcery_mcp` a Python package.
│       ├── __main__.py # Entry point for running the package as a script.
│       ├── logging_config.py # Configures the logging for the server.
│       ├── server.py # The main server file, responsible for initializing FastMCP and registering tools.
│       ├── logs/ # Directory for storing server logs.
│       ├── scripts/ # Contains utility scripts for model management.
│       │   ├── README.md # Documentation for the scripts.
│       │   ├── __init__.py # Makes `scripts` a Python package.
│       │   ├── create_model_descriptions.py # Script to generate model descriptions.
│       │   ├── download_clip.py # Script to download CLIP models.
│       │   ├── post_install.py # Script to run post-installation tasks.
│       │   └── download_models.py # Script to download other models (e.g., YOLO).
│       └── tools/ # Contains the implementation of individual MCP tools.
│           ├── README.md # Documentation for the tools.
│           ├── __init__.py # Import the central logger
│           └── *.py # Implements the tool.
└── tests/ # Contains test files for the project.
    ├── test_server.py # Tests for the main server functionality.
    ├── data/ # Contains test data, likely image files used in tests.
    └── tools/ # Contains tests for individual tools.
git clone https://github.com/sunriseapps/imagesorcery-mcp.git # Or your fork
cd imagesorcery-mcp
python -m venv venv
source venv/bin/activate # For Linux/macOS
# venv\Scripts\activate # For Windows
pip install -e ".[dev]"
This will install `imagesorcery-mcp` and all dependencies from `[project.dependencies]` and `[project.optional-dependencies].dev` (including `build` and `twine`).
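Since `build` and `twine` are part of the dev extras, a typical local packaging check might look like this (a sketch, not a prescribed release process):

python -m build      # build the sdist and wheel into dist/
twine check dist/*   # validate the distribution metadata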
These rules apply to all contributors: humans and AI.
Read all the `README.md` files in the project. Understand the project structure and purpose. Understand the guidelines for contributing. Think about how they relate to your task, and how to make changes accordingly.
Read `pyproject.toml`.
Pay attention to the sections: `[tool.ruff]`, `[tool.ruff.lint]`, `[project.optional-dependencies]`, and `dependencies` under `[project]`.
Strictly follow the code style defined in `pyproject.toml`.
Stick to the stack defined in the `pyproject.toml` dependencies and do not add any new dependencies without a good reason.
Write your code in new and existing files.
If new dependencies are needed, update `pyproject.toml` and install them via `pip install -e .` or `pip install -e ".[dev]"`. Do not install them directly via `pip install`.
Check out existing source code for examples (e.g. `src/imagesorcery_mcp/server.py`, `src/imagesorcery_mcp/tools/crop.py`). Stick to the code style, naming conventions, input and output data formats, code structure, architecture, etc. of the existing code.
Update related `README.md` files with your changes.
Stick to the format and structure of the existing `README.md` files.
Write tests for your code.
Check out existing tests for examples (e.g. `tests/test_server.py`, `tests/tools/test_crop.py`).
Stick to the code style, naming conventions, input and output data formats, code structure, architecture, etc. of the existing tests.
Run tests and linter to ensure everything works:
pytest
ruff check .
If anything fails, fix the code and tests. All new code is strictly required to comply with the linter rules and pass all tests.
If you have any questions, issues, or suggestions regarding this project, feel free to reach out to:
You can also open an issue in the repository for bug reports or feature requests.
This project is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License.