Blender AI MCP

Modular MCP Server + Blender Addon for AI-Driven 3D Modeling.

License: Apache 2.0 · Python 3.11+ · Docker · CI Status · GitHub Stars · GitHub Sponsors

A production-shaped MCP server for Blender.

blender-ai-mcp lets Claude, ChatGPT, Codex, and other MCP clients control Blender through a stable tool API instead of ad-hoc Python generation. The result is a safer, smaller, and more reliable surface for real modeling work: goal-first routing, curated public tools, deterministic inspection, and verification that does not depend on guesswork.

Watch demo video on YouTube

Why This Exists

Most "AI + Blender" setups still ask the model to write raw bpy scripts. That breaks exactly where production work gets interesting:

  1. Blender APIs drift across versions.
  2. Context-sensitive operators fail when the active object, mode, or selection is wrong.
  3. Raw scripts give weak feedback when something goes wrong.
  4. Vision can describe a result, but it cannot be trusted as the final authority.

blender-ai-mcp takes the opposite approach: treat Blender control as a product surface, not a code-generation stunt.

Why This MCP Server Instead of Raw Python

  • Stable contracts over script synthesis. The model calls tools with validated parameters instead of improvising Blender code.
  • Goal-first orchestration. Normal guided sessions start from router_set_goal(...), so the system knows what the model is trying to build before it starts calling low-level actions.
  • Small public surface. The default llm-guided profile exposes a tiny, search-first bootstrap layer instead of flooding the model with the whole runtime inventory.
  • Truth-first verification. Inspection, measurement, and assertion tools determine what is actually true in Blender.
  • Safe execution boundaries. The Blender addon executes operations on Blender's main thread while the MCP server handles routing, validation, discovery, and structured responses.

The Product Approach

The business idea formalized in TASK-113 is simple:

  • Atomic tools are the implementation substrate. They stay small, precise, and mostly hidden from the normal public surface.
  • Macro tools are the preferred LLM-facing layer for meaningful task-sized work.
  • Workflow tools are bounded multi-step process tools with explicit reporting, not open-ended "do anything" endpoints.
  • Goal-first orchestration keeps sessions anchored to an active intent instead of making the model rediscover context on every turn.
  • Vision assists interpretation, while deterministic measurement and assertions provide the final truth layer.
  • Pluggable vision runtimes now cover local MLX plus external OpenRouter and Google AI Studio / Gemini provider paths behind the same bounded contract.

This is what turns the project from "Blender tools exposed over MCP" into a usable AI control product for modeling pipelines.

LLM-Guided Public Surface

llm-guided is the default production-oriented surface. It is intentionally small, search-first, and designed around goal-aware sessions.

Normal guided flow:

  1. router_set_goal(...)
  2. browse_workflows, search_tools, or call_tool
  3. use grouped/public tools such as check_scene, inspect_scene, or configure_scene
  4. verify with inspection plus scene_measure_* and scene_assert_*
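The guided flow above can be sketched end to end. `McpClient` and its `call` method are hypothetical stand-ins for whatever MCP client you use (they record calls in memory instead of talking to a server); the tool names and the goal-first ordering match the guided surface described here.

```python
# Illustrative sketch of the normal guided flow. `McpClient.call` is a
# hypothetical stand-in for a real MCP client invocation, not this project's API.

class McpClient:
    """Toy in-memory client that records calls instead of contacting a server."""

    def __init__(self):
        self.calls = []

    def call(self, tool, **args):
        self.calls.append((tool, args))
        return {"tool": tool, "args": args, "status": "ok"}


def guided_session(client):
    # 1. Anchor the session to an intent first.
    client.call("router_set_goal", goal="model a phone stand with a cable cutout")
    # 2. Discover what the guided surface offers for that intent.
    client.call("search_tools", query="cutout recess")
    # 3. Work through grouped/public tools.
    client.call("check_scene", query="summary")
    # 4. Verify with deterministic measurement instead of guesswork.
    client.call("call_tool", name="scene_measure_gap",
                arguments={"object_a": "Stand", "object_b": "Cutter"})
    return [tool for tool, _ in client.calls]
```

The point of the ordering is that discovery and verification happen inside an active goal session, not before one exists.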

When a bounded modeling intent matches, the default public working layer should be the macro layer:

  • macro_cutout_recess for recesses, openings, and cutter-driven cutouts
  • macro_relative_layout for align/place/contact-gap part layout
  • macro_attach_part_to_surface for seating one part onto another object's surface/body
  • macro_align_part_with_contact for minimal repair nudges on pairs that almost fit
  • macro_place_symmetry_pair for mirrored pair placement/correction around an explicit mirror plane
  • macro_place_supported_pair for mirrored pair placement/correction against one shared support surface
  • macro_cleanup_part_intersections for bounded pairwise overlap cleanup without free-form collision solving
  • macro_adjust_relative_proportion for bounded ratio repair between related objects
  • macro_adjust_segment_chain_arc for bounded arc adjustment on ordered segment chains
  • macro_finish_form for preset-driven bevel/subdivision/solidify finishing
  • reference_images for goal-scoped reference intake before bounded visual comparison
  • guided_reference_readiness on router_set_goal, router_get_status, and staged reference compare/iterate payloads so clients can see whether reference-driven stage work is actually ready
  • reference_compare_stage_checkpoint for deterministic multi-view stage comparison against attached references during manual iterative work
  • reference_iterate_stage_checkpoint for a session-aware staged correction loop that remembers prior focus, can escalate into inspect/validate when the same correction repeats, and can now target one object, many objects, a collection, or the full assembled silhouette

Current guided bootstrap surface:

  • router_set_goal
  • router_get_status
  • browse_workflows
  • reference_images
  • search_tools
  • call_tool
  • list_prompts
  • get_prompt

Current guided utility prep path:

  • bootstrap/planning search can now reach:
    • scene_get_viewport
    • scene_clean_scene
  • these utility actions stay bounded and do not reopen the full legacy surface
  • the canonical cleanup argument shape on llm-guided is keep_lights_and_cameras; older split flags are compatibility-only and should not be used as the documented public form
  • build goals should still start from router_set_goal(...), but screenshot / viewport / scene-reset requests should use the guided utility path instead
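The canonical-argument rule above can be sketched as a small normalizer. The flag names `keep_lights` and `keep_cameras` are hypothetical examples of older split flags (the docs do not name them here); only `keep_lights_and_cameras` is the documented public form.

```python
# Sketch of normalizing a scene_clean_scene call to the canonical llm-guided
# argument shape. `keep_lights` / `keep_cameras` are HYPOTHETICAL legacy split
# flags used for illustration; keep_lights_and_cameras is the canonical form.

def normalize_clean_scene_args(args: dict) -> dict:
    out = dict(args)
    if "keep_lights_and_cameras" not in out:
        # Fold any hypothetical legacy split flags into the canonical flag.
        legacy = [out.pop(k) for k in ("keep_lights", "keep_cameras") if k in out]
        if legacy:
            out["keep_lights_and_cameras"] = all(legacy)
    return out
```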

Current public aliases on llm-guided:

Internal tool -> llm-guided public name (public arg changes):

  • scene_context -> check_scene (action -> query)
  • scene_inspect -> inspect_scene (object_name -> target_object)
  • scene_configure -> configure_scene (settings -> config)
  • workflow_catalog -> browse_workflows (workflow_name -> name, query -> search_query)
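The alias layer in the table above amounts to a name-and-argument rewrite before dispatch. A minimal sketch, assuming a simple mapping-table design (the dispatch shape is illustrative, not the server's internals):

```python
# Sketch of the guided alias layer: public tool names and public argument
# names are rewritten to internal equivalents before dispatch. The mapping
# data mirrors the alias table; the code shape is illustrative.

GUIDED_ALIASES = {
    "check_scene": ("scene_context", {"query": "action"}),
    "inspect_scene": ("scene_inspect", {"target_object": "object_name"}),
    "configure_scene": ("scene_configure", {"config": "settings"}),
    "browse_workflows": ("workflow_catalog", {"name": "workflow_name",
                                              "search_query": "query"}),
}

def resolve_public_call(tool: str, args: dict) -> tuple[str, dict]:
    if tool not in GUIDED_ALIASES:
        return tool, args  # not aliased: pass through unchanged
    internal, arg_map = GUIDED_ALIASES[tool]
    return internal, {arg_map.get(k, k): v for k, v in args.items()}
```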

Why that matters:

  • the guided profile starts from 8 visible tools instead of the full catalog
  • grouped/public tools stay easy to discover
  • hidden atomic tools remain available as infrastructure, not as the default public mental model
  • specialist families stay out of the normal guided entry layer until the macro surface is broader

Atomic Foundations And Docs

The root README.md is intentionally not the full tool catalog anymore.

The detailed tool inventory and atomic family docs should stay in docs, not on the front page. That is the right long-term structure after TASK-113.

Use these docs depending on what you need:

  • Tool Layering Policy
    • Canonical policy for atomic / macro / workflow, hidden atomic tools, goal-first usage, and vision/assert boundaries.
  • MCP Server Docs
    • Surface profiles, guided aliases, versioned contracts, and runtime/platform guidance.
  • MCP Client Config Examples
    • Ready-to-paste local MCP client config examples for guided/manual surfaces plus MLX, OpenRouter, and Gemini vision variants.
  • Vision Layer Docs
    • Runtime/backends, capture bundles, reference images, macro/workflow vision integration notes, and repo-tracked real viewport eval bundles for both direct user-view and fixed camera-perspective captures.
  • Available Tools Summary
    • Full inventory and grouped/public tool overview.
  • Tool Architecture Index
    • Maintainer-facing map of the tool families underneath the MCP surface.

If you want to see the atomic families the server is built on, start here:

Recommended interpretation:

  • keep /_docs/TOOLS/ as the maintainer-facing atomic/grouped architecture map
  • keep README.md product-facing and compact
  • keep /_docs/AVAILABLE_TOOLS_SUMMARY.md as the runtime inventory

Provider Notes

Current short version:

  • Local default: mlx_local with a Qwen VL 4B-class model path; current repo-validated baseline is mlx-community/Qwen3-VL-4B-Instruct-4bit
  • External iterative compare candidate: OpenRouter with x-ai/grok-4.20-multi-agent
  • External Gemini compare path: Google AI Studio / Gemini now uses a provider-specific narrow compare contract for staged iterative/reference-guided flows

Detailed per-provider table:

Architecture

The system is split on purpose:

  • MCP server (server/): FastMCP surface, public tool definitions, transforms, discovery, and response contracts.
  • Router (server/router/): goal interpretation, safety/correction policy, workflow matching, session context, and guided execution behavior.
  • Blender addon (blender_addon/): actual bpy execution, RPC handlers, and Blender main-thread-safe operation scheduling.

Communication happens through JSON-RPC over TCP sockets.
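A minimal sketch of that JSON-RPC-over-TCP round trip, using the addon's default port 8765 from the Quick Start. Newline-delimited framing is an assumption for illustration; the actual wire framing may differ.

```python
# Minimal JSON-RPC 2.0 request builder and round-trip sketch against the
# Blender addon's RPC port. Newline framing here is an ASSUMPTION for
# illustration only, not the documented wire protocol.
import json
import socket

def build_request(method: str, params: dict, req_id: int) -> bytes:
    payload = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    return (json.dumps(payload) + "\n").encode("utf-8")

def call_blender(host: str, port: int, method: str, params: dict) -> dict:
    # e.g. call_blender("127.0.0.1", 8765, "scene_context", {"action": "summary"})
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(build_request(method, params, req_id=1))
        raw = sock.makefile("rb").readline()
    return json.loads(raw)
```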

More detail:

Structured Contract Baseline

The server is moving critical surfaces toward machine-readable payloads instead of prose-heavy JSON strings.

Current structured-contract baseline includes:

  • macro_cutout_recess
  • macro_finish_form
  • macro_attach_part_to_surface
  • macro_align_part_with_contact
  • macro_place_supported_pair
  • macro_cleanup_part_intersections
  • macro_relative_layout
  • scene_create
  • scene_configure
  • mesh_select
  • mesh_select_targeted
  • mesh_inspect
  • scene_snapshot_state
  • scene_compare_snapshot
  • scene_measure_distance
  • scene_measure_dimensions
  • scene_measure_gap
  • scene_measure_alignment
  • scene_measure_overlap
  • scene_assert_contact
  • scene_assert_dimensions
  • scene_assert_containment
  • scene_assert_symmetry
  • scene_assert_proportion
  • router_set_goal
  • router_get_status
  • workflow_catalog

That is important for automation, auditing, and future macro/workflow composition.

Contact Truth Semantics

For contact-sensitive checks on curved or rounded forms, the truth layer now distinguishes:

  • mesh-surface contact/gap semantics when a bounded mesh-aware path is available
  • bbox fallback semantics when a mesh-aware path is not available

That means a pair can still show bbox contact while the main measured relation remains separated if the real mesh surfaces still have a visible gap. Guided hybrid truth follow-up now carries that distinction forward in operator-facing summaries instead of collapsing it into a generic "contact passed/failed" claim.

When the mesh-aware path finds a real overlap, the main measured relation also stays overlapping, so overlap rejection in scene_assert_contact(...) still works as a separate truth condition instead of collapsing into plain contact.

Structured Clarification Flow

The guided surface supports missing-input handling as part of the product contract, not as an afterthought.

  • Model-first clarification is the default for router_set_goal(...) on llm-guided: missing workflow parameters return a typed needs_input payload to the outer model first.
  • Typed fallback payloads keep the same flow usable on tool-only or compatibility clients.
  • Human/native clarification is reserved for later/fallback policy rather than the default first step of workflow execution.
  • router_set_goal(...) can ask for constrained choices, booleans, enums, or workflow confirmation.
  • partial answers survive across follow-up turns.
  • workflow_catalog import conflicts reuse the same clarification model.
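The clarification loop above can be sketched from the client side: keep calling the goal entry point, answer each typed question, and carry partial answers across turns. The payload field names (`status`, `questions`, `name`) are illustrative assumptions about the contract shape.

```python
# Sketch of a client-side loop over typed needs_input payloads from
# router_set_goal(...). Payload field names are ASSUMED for illustration.

def run_with_clarification(call_goal, answer_fn, max_rounds: int = 3) -> dict:
    """Call the goal entry point via `call_goal(answers)`, answering typed
    needs_input questions with `answer_fn` until the goal is accepted."""
    answers: dict = {}  # partial answers survive across follow-up turns
    for _ in range(max_rounds):
        result = call_goal(answers)
        if result.get("status") != "needs_input":
            return result
        for question in result.get("questions", []):
            answers[question["name"]] = answer_fn(question)
    return {"status": "unresolved", "answers": answers}
```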

Guided Handoff Contract

The guided surface now treats workflow fallback as an explicit typed contract instead of a phase side effect hidden in prose.

  • router_set_goal(...) returns guided_handoff on bounded continuation paths such as continuation_mode="guided_manual_build" and continuation_mode="guided_utility".
  • guided_handoff names the target_phase, direct_tools, supporting_tools, and discovery_tools for the next step on llm-guided.
  • workflow_import_recommended stays False on these fallback paths unless the user explicitly asks for workflow import/create behavior.
  • router_get_status(...) preserves the active guided_handoff in session diagnostics so clients can recover the intended continuation path.

Guided Reference Readiness

Reference-driven staged work now has one explicit readiness contract instead of hidden ordering assumptions.

  • router_set_goal(...) and router_get_status(...) expose guided_reference_readiness.
  • the payload reports attached_reference_count, pending_reference_count, compare_ready, iterate_ready, plus machine-readable blocking_reason and next_action
  • reference_images(action="attach", ...) can stay pending until the guided goal session is actually ready, then adopt automatically
  • if the same goal already has active refs and new ones are staged during needs_input, the staged refs stay separate from the already-active goal references until readiness returns
  • if a ready session still carries explicit pending refs for another goal, reference_images(action="list" / "remove" / "clear", ...) now treats that merged visible set consistently instead of leaving broken pending records
  • reference_compare_stage_checkpoint(...) and reference_iterate_stage_checkpoint(...) now fail fast when the session is not ready, and echo the same guided_reference_readiness payload
  • for staged compare/iterate, goal_override is no longer a session substitute; use an active guided goal session instead
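The fail-fast behavior above can be sketched as a gate over the readiness payload: a stage checkpoint either proceeds or returns the machine-readable blockers. Field names follow the payload described above; the error shape is an assumption.

```python
# Illustrative gate mirroring guided_reference_readiness: staged compare and
# iterate work fails fast when the session is not ready, and the readiness
# payload is echoed either way. The error shape is an ASSUMPTION.

def gate_stage_checkpoint(readiness: dict) -> dict:
    if readiness.get("compare_ready"):
        return {"status": "ok", "guided_reference_readiness": readiness}
    return {
        "status": "not_ready",
        "blocking_reason": readiness.get("blocking_reason"),
        "next_action": readiness.get("next_action"),
        "guided_reference_readiness": readiness,
    }
```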

Session Diagnostics

Guided/runtime payloads now expose explicit MCP session metadata:

  • router_set_goal(...) includes session_id and transport
  • router_get_status(...) includes session_id and transport
  • reference_compare_stage_checkpoint(...) includes session_id and transport
  • reference_iterate_stage_checkpoint(...) includes session_id and transport

Current runtime guidance:

  • stateful streamable HTTP is the recommended transport for longer guided runs and for debugging session-aware reference / checkpoint flows
  • recent guided-session hardening removed the known router bookkeeping path that could clobber active goal/reference session state during routed tool execution
  • if you investigate a future state-loss incident, compare session_id and transport first to distinguish:
    • transport/session reconnects
    • application-level goal resets
    • normal guided readiness blockers such as missing goal or references
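The triage above can be sketched as a comparison of two status payloads: a changed session_id or transport points at the connection layer, a vanished goal on the same session points at an application-level reset, and anything else is a normal readiness blocker. The classification labels are illustrative.

```python
# Sketch of state-loss triage using session diagnostics. Compares session_id
# and transport first, as recommended above. Labels are illustrative.

def classify_state_loss(before: dict, after: dict) -> str:
    if (before["session_id"] != after["session_id"]
            or before["transport"] != after["transport"]):
        return "transport_or_session_reconnect"
    if before.get("goal") and not after.get("goal"):
        return "application_level_goal_reset"
    return "normal_readiness_blocker"
```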

Server-Side Sampling Assistants Baseline

The MCP server now has a bounded analytical assistant layer inside an active request.

Current use cases:

  • optional assistant_summary on inspection-heavy paths such as scene_snapshot_state, scene_compare_snapshot, scene_get_hierarchy, scene_get_bounding_box, and scene_get_origin_info
  • bounded repair_suggestion on router_set_goal, router_get_status, and workflow_catalog

Explicit assistant terminal states:

  • success
  • unavailable
  • masked_error
  • rejected_by_policy

The rule is strict: assistants may help summarize or suggest, but they do not override scene truth or router policy.
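That strict rule can be sketched as a merge policy: the assistant's output is attached only on `success`, and the deterministic payload is never replaced. Field names (`state`, `summary`, `assistant_summary`) are illustrative assumptions.

```python
# Sketch of consuming an assistant result under the strict rule above:
# summaries are additive on success only, and scene truth always wins.
# Field names are ASSUMED for illustration.

ASSISTANT_STATES = {"success", "unavailable", "masked_error", "rejected_by_policy"}

def merge_assistant(truth_payload: dict, assistant: dict) -> dict:
    state = assistant.get("state")
    if state not in ASSISTANT_STATES:
        raise ValueError(f"unknown assistant state: {state}")
    merged = dict(truth_payload)  # never mutate or override the truth payload
    if state == "success":
        merged["assistant_summary"] = assistant.get("summary")
    return merged
```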

Versioned Surface Baseline

Public surface evolution is versioned explicitly:

Surface profile -> default contract line:

  • legacy-manual -> legacy-v1
  • legacy-flat -> legacy-v1
  • llm-guided -> llm-guided-v2

Compatibility note:

  • llm-guided-v1 remains selectable as a rollback line
  • workflow_catalog, scene_context, and scene_inspect participate in the guided surface evolution story

Code Mode Decision

Current benchmark baselines:

  • legacy-flat
  • llm-guided
  • code-mode-pilot

Current decision:

  • Go decision: keep code-mode-pilot as an experimental read-only surface
  • Do not make Code Mode the default path for write-heavy or geometry-destructive Blender work

Support Matrix

  • Blender: tested on Blender 5.0 in E2E coverage; addon minimum remains Blender 4.0+ on a best-effort basis.
  • Python: 3.11+
  • FastMCP task runtime: fastmcp 3.1.1 + pydocket 0.18.2
  • OS: macOS / Windows / Linux
  • Memory: router semantic features rely on a local LaBSE model and related vector infrastructure

Quick Start

1. Install the Blender addon

  1. Download blender_ai_mcp.zip from the Releases page or build it locally with python scripts/build_addon.py.
  2. Open Blender -> Edit -> Preferences -> Add-ons.
  3. Click Install... and select the zip file.
  4. Enable the addon. It starts the local Blender RPC server on port 8765.

2. Run the MCP server on the guided profile

Recommended defaults:

  • ROUTER_ENABLED=true
  • MCP_SURFACE_PROFILE=llm-guided
  • map /tmp if you want host-visible image/file outputs

Example Docker run (stdio transport):

docker run -i --rm \
  -v /tmp:/tmp \
  -e BLENDER_AI_TMP_INTERNAL_DIR=/tmp \
  -e BLENDER_AI_TMP_EXTERNAL_DIR=/tmp \
  -e ROUTER_ENABLED=true \
  -e MCP_SURFACE_PROFILE=llm-guided \
  -e BLENDER_RPC_HOST=host.docker.internal \
  ghcr.io/patrykiti/blender-ai-mcp:latest
Example Docker run (Streamable HTTP transport):

docker run --rm \
  -p 8000:8000 \
  -v /tmp:/tmp \
  -e BLENDER_AI_TMP_INTERNAL_DIR=/tmp \
  -e BLENDER_AI_TMP_EXTERNAL_DIR=/tmp \
  -e ROUTER_ENABLED=true \
  -e MCP_SURFACE_PROFILE=llm-guided \
  -e MCP_TRANSPORT_MODE=streamable \
  -e MCP_HTTP_HOST=0.0.0.0 \
  -e MCP_HTTP_PORT=8000 \
  -e MCP_STREAMABLE_HTTP_PATH=/mcp \
  -e BLENDER_RPC_HOST=host.docker.internal \
  ghcr.io/patrykiti/blender-ai-mcp:latest

Example generic MCP client config:

{
  "mcpServers": {
    "blender-ai-mcp": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-v", "/tmp:/tmp",
        "-e", "BLENDER_AI_TMP_INTERNAL_DIR=/tmp",
        "-e", "BLENDER_AI_TMP_EXTERNAL_DIR=/tmp",
        "-e", "ROUTER_ENABLED=true",
        "-e", "MCP_SURFACE_PROFILE=llm-guided",
        "-e", "BLENDER_RPC_HOST=host.docker.internal",
        "ghcr.io/patrykiti/blender-ai-mcp:latest"
      ]
    }
  }
}

Network notes:

  • macOS / Windows: use host.docker.internal
  • Linux: prefer --network host with BLENDER_RPC_HOST=127.0.0.1
  • MCP_TRANSPORT_MODE=stdio keeps the current subprocess/stdio MCP mode
  • MCP_TRANSPORT_MODE=streamable starts a stateful Streamable HTTP MCP server

For broader profile/config examples, use:

Testing

Unit tests:

PYTHONPATH=. poetry run pytest tests/unit/ -v

Unit collection count:

poetry run pytest tests/unit --collect-only

E2E tests:

python3 scripts/run_e2e_tests.py

E2E collection count:

poetry run pytest tests/e2e --collect-only

Pre-commit:

poetry run pre-commit install --hook-type pre-commit --hook-type pre-push
poetry run pre-commit run --all-files

More detail:

Documentation Map

Contributing

Read CONTRIBUTING.md before opening a PR. The repo enforces Clean Architecture boundaries, typed Python, router metadata rules, and pre-commit validation.

Community And Support

If blender-ai-mcp is useful in your workflow, consider sponsoring its long-term development.

Sponsorship helps fund maintenance, docs, testing, and the higher-level reliability work that makes this repo different from raw Blender code generation: goal-first routing, curated tools, deterministic verification, and production-shaped workflow support.

Become a sponsor

Author

Patryk Ciechański

License

This project is licensed under the Apache License 2.0.

See:
