blender-ai-mcp
Modular MCP server + Blender addon for AI-driven 3D modeling: a production-shaped MCP server for Blender.
blender-ai-mcp lets Claude, ChatGPT, Codex, and other MCP clients control Blender through a stable tool API instead of ad-hoc Python generation. The result is a safer, smaller, and more reliable surface for real modeling work: goal-first routing, curated public tools, deterministic inspection, and verification that does not depend on guesswork.
Why This Exists
Most "AI + Blender" setups still ask the model to write raw bpy scripts. That breaks exactly where production work gets interesting:
- Blender APIs drift across versions.
- Context-sensitive operators fail when the active object, mode, or selection is wrong.
- Raw scripts give weak feedback when something goes wrong.
- Vision can describe a result, but it cannot be trusted as the final authority.
blender-ai-mcp takes the opposite approach: treat Blender control as a product surface, not a code-generation stunt.
Why This MCP Server Instead of Raw Python
- Stable contracts over script synthesis. The model calls tools with validated parameters instead of improvising Blender code.
- Goal-first orchestration. Normal guided sessions start from `router_set_goal(...)`, so the system knows what the model is trying to build before it starts calling low-level actions.
- Small public surface. The default `llm-guided` profile exposes a tiny, search-first bootstrap layer instead of flooding the model with the whole runtime inventory.
- Truth-first verification. Inspection, measurement, and assertion tools determine what is actually true in Blender.
- Safe execution boundaries. The Blender addon executes operations on Blender's main thread while the MCP server handles routing, validation, discovery, and structured responses.
The Product Approach
The business idea formalized in TASK-113 is simple:
- Atomic tools are the implementation substrate. They stay small, precise, and mostly hidden from the normal public surface.
- Macro tools are the preferred LLM-facing layer for meaningful task-sized work.
- Workflow tools are bounded multi-step process tools with explicit reporting, not open-ended "do anything" endpoints.
- Goal-first orchestration keeps sessions anchored to an active intent instead of making the model rediscover context on every turn.
- Vision assists interpretation, while deterministic measurement and assertions provide the final truth layer.
- Pluggable vision runtimes now cover local MLX plus external OpenRouter and Google AI Studio / Gemini provider paths behind the same bounded contract.
This is what turns the project from "Blender tools exposed over MCP" into a usable AI control product for modeling pipelines.
LLM-Guided Public Surface
llm-guided is the default production-oriented surface. It is intentionally small, search-first, and designed around goal-aware sessions.
Normal guided flow:
- `router_set_goal(...)`
- `browse_workflows`, `search_tools`, or `call_tool`
- use grouped/public tools such as `check_scene`, `inspect_scene`, or `configure_scene`
- verify with inspection plus `scene_measure_*` and `scene_assert_*`
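The guided call order above can be sketched as a short session script. This is a minimal illustration, assuming a generic MCP client exposed as a `call(tool, **kwargs)` function (the recorder below is a stand-in, not the real client), and the goal text, object names, and macro arguments are invented examples.

```python
# Stand-in for an MCP client: records which guided tools run, in order.
calls = []

def call(tool, **kwargs):
    """Hypothetical MCP tool-call dispatcher (illustrative only)."""
    calls.append(tool)
    return {"tool": tool, "args": kwargs}

# 1. Anchor the session to an intent first.
call("router_set_goal", goal="model a phone stand with a recessed cable slot")

# 2. Discover what to use next instead of guessing.
call("browse_workflows", search_query="recess")
call("search_tools", query="cutout recess")

# 3. Act through the public working layer.
call("call_tool", name="macro_cutout_recess",
     arguments={"target_object": "Stand"})

# 4. Verify with deterministic measurement, not vision guesswork.
call("inspect_scene", target_object="Stand")
call("call_tool", name="scene_measure_gap",
     arguments={"object_a": "Stand", "object_b": "Slot_Cutter"})

# Goal-first: nothing runs before the goal is set.
assert calls[0] == "router_set_goal"
```

The point of the sketch is the ordering contract: goal first, discovery second, action third, deterministic verification last.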
When a bounded modeling intent matches, the default public working layer should be the macro layer:
- `macro_cutout_recess` for recesses, openings, and cutter-driven cutouts
- `macro_relative_layout` for align/place/contact-gap part layout
- `macro_attach_part_to_surface` for seating one part onto another object's surface/body
- `macro_align_part_with_contact` for minimal repair nudges on pairs that almost fit
- `macro_place_symmetry_pair` for mirrored pair placement/correction around an explicit mirror plane
- `macro_place_supported_pair` for mirrored pair placement/correction against one shared support surface
- `macro_cleanup_part_intersections` for bounded pairwise overlap cleanup without free-form collision solving
- `macro_adjust_relative_proportion` for bounded ratio repair between related objects
- `macro_adjust_segment_chain_arc` for bounded arc adjustment on ordered segment chains
- `macro_finish_form` for preset-driven bevel/subdivision/solidify finishing
- `reference_images` for goal-scoped reference intake before bounded visual comparison
- `guided_reference_readiness` on `router_set_goal`, `router_get_status`, and staged reference compare/iterate payloads, so clients can see whether reference-driven stage work is actually ready
- `reference_compare_stage_checkpoint` for deterministic multi-view stage comparison against attached references during manual iterative work
- `reference_iterate_stage_checkpoint` for a session-aware staged correction loop that remembers prior focus, can escalate into inspect/validate when the same correction repeats, and can now target one object, many objects, a collection, or the full assembled silhouette
Current guided bootstrap surface:
- `router_set_goal`
- `router_get_status`
- `browse_workflows`
- `reference_images`
- `search_tools`
- `call_tool`
- `list_prompts`
- `get_prompt`
Current guided utility prep path:
- bootstrap/planning search can now reach `scene_get_viewport` and `scene_clean_scene`
- these utility actions stay bounded and do not reopen the full legacy surface
- the canonical cleanup argument shape on `llm-guided` is `keep_lights_and_cameras`; older split flags are compatibility-only and should not be used as the documented public form
- build goals should still start from `router_set_goal(...)`, but screenshot / viewport / scene-reset requests should use the guided utility path instead
Current public aliases on llm-guided:
| Internal tool | llm-guided public name | Public arg changes |
|---|---|---|
| `scene_context` | `check_scene` | `action` -> `query` |
| `scene_inspect` | `inspect_scene` | `object_name` -> `target_object` |
| `scene_configure` | `configure_scene` | `settings` -> `config` |
| `workflow_catalog` | `browse_workflows` | `workflow_name` -> `name`, `query` -> `search_query` |
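The alias table above is essentially a rename map applied before dispatch. A minimal sketch of that transform, using the names and argument renames from the table; the `to_internal` helper itself is illustrative, not the server's actual implementation:

```python
# Public llm-guided name -> (internal tool, public-arg -> internal-arg renames).
# Data taken from the alias table above; the transform code is a sketch.
ALIASES = {
    "check_scene":      ("scene_context",    {"query": "action"}),
    "inspect_scene":    ("scene_inspect",    {"target_object": "object_name"}),
    "configure_scene":  ("scene_configure",  {"config": "settings"}),
    "browse_workflows": ("workflow_catalog", {"name": "workflow_name",
                                              "search_query": "query"}),
}

def to_internal(public_tool: str, public_args: dict) -> tuple[str, dict]:
    """Rewrite a public llm-guided call into the internal tool contract."""
    internal_tool, renames = ALIASES[public_tool]
    internal_args = {renames.get(k, k): v for k, v in public_args.items()}
    return internal_tool, internal_args

tool, args = to_internal("inspect_scene", {"target_object": "Cube"})
# tool is "scene_inspect", args use the internal "object_name" key
```

Arguments not listed in the rename map pass through unchanged, which is why the table only documents the renamed parameters.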
Why that matters:
- the guided profile starts from 8 visible tools instead of the full catalog
- grouped/public tools stay easy to discover
- hidden atomic tools remain available as infrastructure, not as the default public mental model
- specialist families stay out of the normal guided entry layer until the macro surface is broader
Atomic Foundations And Docs
The root README.md is intentionally not the full tool catalog anymore.
The detailed tool inventory and atomic family docs should stay in docs, not on the front page. That is the right long-term structure after TASK-113.
Use these docs depending on what you need:
- Tool Layering Policy
- Canonical policy for `atomic / macro / workflow` layering, hidden atomic tools, goal-first usage, and vision/assert boundaries.
- MCP Server Docs
- Surface profiles, guided aliases, versioned contracts, and runtime/platform guidance.
- MCP Client Config Examples
- Ready-to-paste local MCP client config examples for guided/manual surfaces plus MLX, OpenRouter, and Gemini vision variants.
- Vision Layer Docs
- Runtime/backends, capture bundles, reference images, macro/workflow vision integration notes, and repo-tracked real viewport eval bundles for both direct user-view and fixed camera-perspective captures.
- Available Tools Summary
- Full inventory and grouped/public tool overview.
- Tool Architecture Index
- Maintainer-facing map of the tool families underneath the MCP surface.
If you want to see the atomic families the server is built on, start here:
Recommended interpretation:
- keep `/_docs/TOOLS/` as the maintainer-facing atomic/grouped architecture map
- keep `README.md` product-facing and compact
- keep `/_docs/AVAILABLE_TOOLS_SUMMARY.md` as the runtime inventory
Provider Notes
Current short version:
- Local default: `mlx_local` with a Qwen VL 4B-class model path; the current repo-validated baseline is `mlx-community/Qwen3-VL-4B-Instruct-4bit`
- External iterative compare candidate: OpenRouter with `x-ai/grok-4.20-multi-agent`
- External Gemini compare path: Google AI Studio / Gemini now uses a provider-specific narrow compare contract for staged iterative/reference-guided flows
Detailed per-provider table:
Architecture
The system is split on purpose:
- MCP server (`server/`): FastMCP surface, public tool definitions, transforms, discovery, and response contracts.
- Router (`server/router/`): goal interpretation, safety/correction policy, workflow matching, session context, and guided execution behavior.
- Blender addon (`blender_addon/`): actual `bpy` execution, RPC handlers, and Blender main-thread-safe operation scheduling.
Communication happens through JSON-RPC over TCP sockets.
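To make the wire contract concrete, here is a minimal sketch of a JSON-RPC 2.0 request frame. The envelope fields (`jsonrpc`, `id`, `method`, `params`) are standard JSON-RPC 2.0; the newline framing and the example method name are assumptions for illustration, not the addon's documented wire format.

```python
import json

def make_request(method: str, params: dict, request_id: int) -> bytes:
    """Build a JSON-RPC 2.0 request frame (newline framing is an assumption)."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,       # lets the caller match the response to the call
        "method": method,
        "params": params,
    }
    return (json.dumps(payload) + "\n").encode("utf-8")

# Hypothetical example method/params for illustration:
frame = make_request("scene_context", {"action": "summary"}, 1)
decoded = json.loads(frame.decode("utf-8"))
```

In practice the server writes such frames to the addon's TCP socket (port `8765` by default, per the Quick Start) and matches responses by `id`.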
More detail:
Structured Contract Baseline
The server is moving critical surfaces toward machine-readable payloads instead of prose-heavy JSON strings.
Current structured-contract baseline includes:
- `macro_cutout_recess`
- `macro_finish_form`
- `macro_attach_part_to_surface`
- `macro_align_part_with_contact`
- `macro_place_supported_pair`
- `macro_cleanup_part_intersections`
- `macro_relative_layout`
- `scene_create`
- `scene_configure`
- `mesh_select`
- `mesh_select_targeted`
- `mesh_inspect`
- `scene_snapshot_state`
- `scene_compare_snapshot`
- `scene_measure_distance`
- `scene_measure_dimensions`
- `scene_measure_gap`
- `scene_measure_alignment`
- `scene_measure_overlap`
- `scene_assert_contact`
- `scene_assert_dimensions`
- `scene_assert_containment`
- `scene_assert_symmetry`
- `scene_assert_proportion`
- `router_set_goal`
- `router_get_status`
- `workflow_catalog`
That is important for automation, auditing, and future macro/workflow composition.
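As a sketch of what "machine-readable payload" means here, a measurement result can be a typed record rather than prose. The field names below are illustrative assumptions (not the server's published schema), using `scene_measure_gap` as the example:

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class GapMeasurement:
    """Illustrative structured payload for a gap measurement."""
    object_a: str
    object_b: str
    gap: float       # signed distance; a negative value would indicate overlap
    semantics: str   # "mesh_surface" or "bbox_fallback" (see contact semantics)

payload = asdict(GapMeasurement("Lid", "Box", 0.002, "mesh_surface"))
# payload is a plain dict, ready for auditing, logging, or composition
```

A typed record like this is what lets macros and workflows compose measurement results programmatically instead of parsing free text.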
Contact Truth Semantics
For contact-sensitive checks on curved or rounded forms, the truth layer now distinguishes:
- mesh-surface contact/gap semantics when a bounded mesh-aware path is available
- bbox fallback semantics when a mesh-aware path is not available
That means a pair can still show bbox contact while the main measured relation
remains separated if the real mesh surfaces still have a visible gap. Guided
hybrid truth follow-up now carries that distinction forward in operator-facing
summaries instead of collapsing it into a generic "contact passed/failed"
claim.
When the mesh-aware path finds a real overlap, the main measured relation also stays overlapping, so overlap rejection in `scene_assert_contact(...)` still works as a separate truth condition instead of collapsing into plain contact.
Structured Clarification Flow
The guided surface supports missing-input handling as part of the product contract, not as an afterthought.
- Model-first clarification is the default for `router_set_goal(...)` on `llm-guided`: missing workflow parameters return a typed `needs_input` payload to the outer model first.
- Typed fallback payloads keep the same flow usable on tool-only or compatibility clients.
- Human/native clarification is reserved for later/fallback policy rather than the default first step of workflow execution.
- `router_set_goal(...)` can ask for constrained choices, booleans, enums, or workflow confirmation.
- Partial answers survive across follow-up turns.
- `workflow_catalog` import conflicts reuse the same clarification model.
Guided Handoff Contract
The guided surface now treats workflow fallback as an explicit typed contract instead of a phase side effect hidden in prose.
- `router_set_goal(...)` returns `guided_handoff` on bounded continuation paths such as `continuation_mode="guided_manual_build"` and `continuation_mode="guided_utility"`.
- `guided_handoff` names the `target_phase`, `direct_tools`, `supporting_tools`, and `discovery_tools` for the next step on `llm-guided`.
- `workflow_import_recommended` stays `False` on these fallback paths unless the user explicitly asks for workflow import/create behavior.
- `router_get_status(...)` preserves the active `guided_handoff` in session diagnostics so clients can recover the intended continuation path.
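For orientation, a sketch of what such a payload could look like on a `guided_manual_build` fallback. The top-level field names come from the contract described above; the phase name and the specific tool lists are invented examples:

```python
# Illustrative guided_handoff payload; tool lists and phase are examples only.
handoff = {
    "continuation_mode": "guided_manual_build",
    "guided_handoff": {
        "target_phase": "blocking",                       # hypothetical phase
        "direct_tools": ["macro_cutout_recess", "macro_relative_layout"],
        "supporting_tools": ["inspect_scene", "check_scene"],
        "discovery_tools": ["search_tools", "browse_workflows"],
    },
    # Stays False on fallback paths unless the user asks for workflow import:
    "workflow_import_recommended": False,
}

next_tools = handoff["guided_handoff"]["direct_tools"]
```

A client can read `direct_tools` first, fall back to `discovery_tools` when nothing fits, and never needs to guess the continuation from prose.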
Guided Reference Readiness
Reference-driven staged work now has one explicit readiness contract instead of hidden ordering assumptions.
- `router_set_goal(...)` and `router_get_status(...)` expose `guided_reference_readiness`.
- the payload reports `attached_reference_count`, `pending_reference_count`, `compare_ready`, `iterate_ready`, plus machine-readable `blocking_reason` and `next_action`
- `reference_images(action="attach", ...)` can stay pending until the guided goal session is actually ready, then adopt automatically
- if the same goal already has active refs and new ones are staged during `needs_input`, the staged refs stay separate from the already-active goal references until readiness returns
- if a ready session still carries explicit pending refs for another goal, `reference_images(action="list" | "remove" | "clear", ...)` now treats that merged visible set consistently instead of leaving broken pending records
- `reference_compare_stage_checkpoint(...)` and `reference_iterate_stage_checkpoint(...)` now fail fast when the session is not ready, and echo the same `guided_reference_readiness` payload
- for staged compare/iterate, `goal_override` is no longer a session substitute; use an active guided goal session instead
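A sketch of how the readiness payload could be derived from session state. The field names come from the payload description above; the derivation rules, blocking-reason strings, and next-action hints are illustrative assumptions:

```python
def reference_readiness(attached: int, pending: int, goal_active: bool) -> dict:
    """Illustrative derivation of the guided_reference_readiness payload."""
    if not goal_active:
        blocking = "no_active_goal"                      # assumed reason string
        next_action = "call router_set_goal(...) first"
    elif attached == 0:
        blocking = "no_attached_references"              # assumed reason string
        next_action = 'call reference_images(action="attach", ...)'
    else:
        blocking, next_action = None, None
    ready = blocking is None
    return {
        "attached_reference_count": attached,
        "pending_reference_count": pending,
        "compare_ready": ready,
        "iterate_ready": ready,
        "blocking_reason": blocking,
        "next_action": next_action,
    }

# Goal is active, two refs are staged but none attached yet:
state = reference_readiness(attached=0, pending=2, goal_active=True)
```

Because `blocking_reason` and `next_action` are machine-readable, a client can route directly to the fix instead of pattern-matching error prose.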
Session Diagnostics
Guided/runtime payloads now expose explicit MCP session metadata:
- `router_set_goal(...)` includes `session_id` and `transport`
- `router_get_status(...)` includes `session_id` and `transport`
- `reference_compare_stage_checkpoint(...)` includes `session_id` and `transport`
- `reference_iterate_stage_checkpoint(...)` includes `session_id` and `transport`
Current runtime guidance:
- stateful Streamable HTTP is the recommended transport for longer guided runs and for debugging session-aware reference / checkpoint flows
- recent guided-session hardening removed the known router bookkeeping path that could clobber active goal/reference session state during routed tool execution
- if you investigate a future state-loss incident, compare `session_id` and `transport` first to distinguish:
  - transport/session reconnects
  - application-level goal resets
  - normal guided readiness blockers such as missing goal or references
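The triage order above can be sketched as a small classifier over two diagnostic snapshots. The comparison order follows the guidance (check `session_id`/`transport` first); the classifier itself and its verdict strings are illustrative, not shipped server code:

```python
def triage_state_loss(before: dict, after: dict) -> str:
    """Illustrative triage: transport identity first, then goal state."""
    # 1. Different session_id or transport means the session itself changed.
    if (before["session_id"] != after["session_id"]
            or before["transport"] != after["transport"]):
        return "transport_session_reconnect"
    # 2. Same session but the goal vanished: application-level reset.
    if before.get("active_goal") and after.get("active_goal") is None:
        return "application_goal_reset"
    # 3. Otherwise it is likely a normal readiness blocker, not state loss.
    return "guided_readiness_blocker"

verdict = triage_state_loss(
    {"session_id": "abc", "transport": "streamable-http", "active_goal": "g1"},
    {"session_id": "xyz", "transport": "streamable-http", "active_goal": None},
)
```

Here the changed `session_id` wins over the missing goal: the loss is explained by a reconnect, so there is no application bug to chase.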
Server-Side Sampling Assistants Baseline
The MCP server now has a bounded analytical assistant layer inside an active request.
Current use cases:
- optional `assistant_summary` on inspection-heavy paths such as `scene_snapshot_state`, `scene_compare_snapshot`, `scene_get_hierarchy`, `scene_get_bounding_box`, and `scene_get_origin_info`
- bounded `repair_suggestion` on `router_set_goal`, `router_get_status`, and `workflow_catalog`
Explicit assistant terminal states:
- `success`
- `unavailable`
- `masked_error`
- `rejected_by_policy`
The rule is strict: assistants may help summarize or suggest, but they do not override scene truth or router policy.
Versioned Surface Baseline
Public surface evolution is versioned explicitly:
| Surface profile | Default contract line |
|---|---|
| `legacy-manual` | `legacy-v1` |
| `legacy-flat` | `legacy-v1` |
| `llm-guided` | `llm-guided-v2` |
Compatibility note:
- `llm-guided-v1` remains selectable as a rollback line
- `workflow_catalog`, `scene_context`, and `scene_inspect` participate in the guided surface evolution story
Code Mode Decision
Current benchmark baselines:
- `legacy-flat`
- `llm-guided`
- `code-mode-pilot`
Current decision:
- Go decision: keep `code-mode-pilot` as an experimental read-only surface
- Do not make Code Mode the default path for write-heavy or geometry-destructive Blender work
Support Matrix
- Blender: tested on Blender 5.0 in E2E coverage; addon minimum remains Blender 4.0+ on a best-effort basis.
- Python: 3.11+
- FastMCP task runtime: fastmcp 3.1.1 + pydocket 0.18.2
- OS: macOS / Windows / Linux
- Memory: router semantic features rely on a local LaBSE model and related vector infrastructure
Quick Start
1. Install the Blender addon
   - Download `blender_ai_mcp.zip` from the Releases page or build it locally with `python scripts/build_addon.py`.
   - Open Blender -> Edit -> Preferences -> Add-ons.
   - Click Install... and select the zip file.
   - Enable the addon. It starts the local Blender RPC server on port `8765`.
2. Run the MCP server on the guided profile
Recommended defaults:
- `ROUTER_ENABLED=true`
- `MCP_SURFACE_PROFILE=llm-guided`
- map `/tmp` if you want host-visible image/file outputs
Example Docker commands (stdio mode, then Streamable HTTP mode):

```shell
docker run -i --rm \
  -v /tmp:/tmp \
  -e BLENDER_AI_TMP_INTERNAL_DIR=/tmp \
  -e BLENDER_AI_TMP_EXTERNAL_DIR=/tmp \
  -e ROUTER_ENABLED=true \
  -e MCP_SURFACE_PROFILE=llm-guided \
  -e BLENDER_RPC_HOST=host.docker.internal \
  ghcr.io/patrykiti/blender-ai-mcp:latest
```

```shell
docker run --rm \
  -p 8000:8000 \
  -v /tmp:/tmp \
  -e BLENDER_AI_TMP_INTERNAL_DIR=/tmp \
  -e BLENDER_AI_TMP_EXTERNAL_DIR=/tmp \
  -e ROUTER_ENABLED=true \
  -e MCP_SURFACE_PROFILE=llm-guided \
  -e MCP_TRANSPORT_MODE=streamable \
  -e MCP_HTTP_HOST=0.0.0.0 \
  -e MCP_HTTP_PORT=8000 \
  -e MCP_STREAMABLE_HTTP_PATH=/mcp \
  -e BLENDER_RPC_HOST=host.docker.internal \
  ghcr.io/patrykiti/blender-ai-mcp:latest
```
Example generic MCP client config:

```json
{
  "mcpServers": {
    "blender-ai-mcp": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-v", "/tmp:/tmp",
        "-e", "BLENDER_AI_TMP_INTERNAL_DIR=/tmp",
        "-e", "BLENDER_AI_TMP_EXTERNAL_DIR=/tmp",
        "-e", "ROUTER_ENABLED=true",
        "-e", "MCP_SURFACE_PROFILE=llm-guided",
        "-e", "BLENDER_RPC_HOST=host.docker.internal",
        "ghcr.io/patrykiti/blender-ai-mcp:latest"
      ]
    }
  }
}
```
Network notes:
- macOS / Windows: use `host.docker.internal`
- Linux: prefer `--network host` with `BLENDER_RPC_HOST=127.0.0.1`
- `MCP_TRANSPORT_MODE=stdio` keeps the current subprocess/stdio MCP mode
- `MCP_TRANSPORT_MODE=streamable` starts a stateful Streamable HTTP MCP server
For broader profile/config examples, use:
- MCP Server Docs
- MCP Client Config Examples
- `.env.example` for the full tracked runtime/config variable set
Testing
Unit tests:

```shell
PYTHONPATH=. poetry run pytest tests/unit/ -v
```

Unit collection count:

```shell
poetry run pytest tests/unit --collect-only
```

E2E tests:

```shell
python3 scripts/run_e2e_tests.py
```

E2E collection count:

```shell
poetry run pytest tests/e2e --collect-only
```

Pre-commit:

```shell
poetry run pre-commit install --hook-type pre-commit --hook-type pre-push
poetry run pre-commit run --all-files
```
More detail:
Documentation Map
- Architecture
- MCP Server Docs
- Router Docs
- Router Responsibility Boundaries
- Addon Docs
- Available Tools Summary
- Tool Architecture Index
- Prompts
- Tasks
Contributing
Read CONTRIBUTING.md before opening a PR. The repo enforces Clean Architecture boundaries, typed Python, router metadata rules, and pre-commit validation.
Community And Support
If blender-ai-mcp is useful in your workflow, consider sponsoring its long-term development.
Sponsorship helps fund maintenance, docs, testing, and the higher-level reliability work that makes this repo different from raw Blender code generation: goal-first routing, curated tools, deterministic verification, and production-shaped workflow support.
Author
Patryk Ciechański
- GitHub: PatrykIti
License
This project is licensed under the Apache License 2.0.
See: