Add Dockerfile for Docker MCP Registry #198
Conversation
`localhost` inside a container cannot reach a Blender instance running on the host. Default to `host.docker.internal` so the server connects to the host.
Create a dedicated `appuser` and switch to it before the `ENTRYPOINT` so the server runs unprivileged.
Set `DISABLE_TELEMETRY=true` so the containerized server does not phone home. Users can override with `-e DISABLE_TELEMETRY=false`.
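That override works because flags like this are usually read case-insensitively from the environment at startup. A minimal sketch of such a toggle (a hypothetical helper for illustration, not blender-mcp's actual implementation):

```python
import os

def telemetry_disabled(env=None):
    """Return True when DISABLE_TELEMETRY is set to a truthy string."""
    env = os.environ if env is None else env
    value = env.get("DISABLE_TELEMETRY", "false").strip().lower()
    return value in {"1", "true", "yes", "on"}

# Baked-in default from the Dockerfile's ENV line:
print(telemetry_disabled({"DISABLE_TELEMETRY": "true"}))   # True
# User override via `docker run -e DISABLE_TELEMETRY=false ...`:
print(telemetry_disabled({"DISABLE_TELEMETRY": "false"}))  # False
```

Because `docker run -e` takes precedence over `ENV` in the image, the container default stays opt-out while remaining overridable.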
`uvx` needs a writable cache directory at `~/.cache/uv`.
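Taken together, the four comments above sketch a Dockerfile along these lines (an illustration assembled from the review points and the diff discussed later in this thread; note that `--create-home` is what gives `uvx` its writable `~/.cache/uv`):

```dockerfile
FROM python:3.12-slim

RUN pip install --no-cache-dir uv

# Non-root user; --create-home provides the writable ~/.cache/uv that uvx needs
RUN groupadd --system appuser && useradd --system --gid appuser --create-home appuser

# Reach the Blender instance on the host, not inside the container
ENV BLENDER_HOST=host.docker.internal
ENV BLENDER_PORT=9876
# Containerized default; override with `-e DISABLE_TELEMETRY=false`
ENV DISABLE_TELEMETRY=true

USER appuser
ENTRYPOINT ["uvx", "blender-mcp"]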
Review Summary by Qodo

Add Dockerfile for Docker MCP Registry integration
Description

- Adds Dockerfile for containerized deployment via Docker MCP Registry
- Configures container to connect to host Blender instance
- Runs container as non-root user for improved security
- Disables telemetry by default in containerized environments

Diagram

```mermaid
flowchart LR
    A["Python 3.12 Base Image"] --> B["Install uv Package Manager"]
    B --> C["Create Non-Root appuser"]
    C --> D["Configure Environment Variables"]
    D --> E["Set ENTRYPOINT to uvx blender-mcp"]
    F["BLENDER_HOST: host.docker.internal"] -.-> D
    G["BLENDER_PORT: 9876"] -.-> D
    H["DISABLE_TELEMETRY: true"] -.-> D
```
File Changes

1. Dockerfile

Code Review by Qodo
📝 Walkthrough

Introduces a new Dockerfile that builds a Python 3.12-slim image, installs the uv package, creates a non-root app user, sets Blender networking and telemetry environment variables, and uses an ENTRYPOINT to run uvx with blender-mcp==1.5.5.

Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes

🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 2
```diff
@@ -0,0 +1,13 @@
+FROM python:3.12-slim
+
+RUN pip install --no-cache-dir uv
```
🧩 Analysis chain

🏁 Script executed:

```shell
git ls-files | head -20
```

Repository: ahujasid/blender-mcp
Length of output: 368

🏁 Script executed:

```shell
fd -t f "^(pyproject\.toml|README\.md|Dockerfile)$" --max-one-result
```

Repository: ahujasid/blender-mcp
Length of output: 297

🏁 Script executed:

```shell
cat Dockerfile
```

Repository: ahujasid/blender-mcp
Length of output: 344
🌐 Web query:

> uv uvx tool run documentation astral

💡 Result:

Astral uv docs for `uvx` / `uv tool run` (running Python CLI tools in an isolated, ephemeral env):

- Tools concept guide (recommended starting point): explains what "tools" are, how `uvx` is exactly an alias of `uv tool run`, caching behavior, and how it differs from `uv run`. [1]
- CLI reference (`uv tool run` / `uvx`): full command syntax and options, including version pinning like `<package>@<version>` and behavior when a tool is already installed. [2]
- uv overview page: quick examples of `uvx <tool>` and `uv tool install` to contrast ephemeral execution vs installation. [3]

Sources: [1] Tools | uv (Astral docs) https://docs.astral.sh/uv/concepts/tools/ • [2] Commands / CLI reference | uv https://docs.astral.sh/uv/reference/cli/ • [3] uv documentation home https://docs.astral.sh/uv/
🏁 Script executed:

```shell
cat pyproject.toml
```

Repository: ahujasid/blender-mcp
Length of output: 909
Bake the local package into the image instead of resolving it at startup.

This Dockerfile never copies or installs the repository sources, so the resulting image is not actually built from this PR. `uvx` is an alias for `uv tool run` and executes the command from a cached, disposable tool environment, which means cold starts depend on registry/network availability and will run whatever published blender-mcp version is available instead of the code you just built.
Proposed fix

```diff
 FROM python:3.12-slim

 RUN pip install --no-cache-dir uv

 RUN groupadd --system appuser && useradd --system --gid appuser --create-home appuser
+
+WORKDIR /app
+COPY pyproject.toml README.md /app/
+COPY src /app/src/
+RUN pip install --no-cache-dir /app

 ENV BLENDER_HOST=host.docker.internal
 ENV BLENDER_PORT=9876
 ENV DISABLE_TELEMETRY=true

 USER appuser

-ENTRYPOINT ["uvx", "blender-mcp"]
+ENTRYPOINT ["blender-mcp"]
```

Also applies to: 13-13
```diff
+RUN groupadd --system appuser && useradd --system --gid appuser --create-home appuser
+
+ENV BLENDER_HOST=host.docker.internal
```
🧩 Analysis chain

🌐 Web query:

> Docker host.docker.internal Linux Docker Engine support networking

💡 Result:

Does `host.docker.internal` work on Linux with Docker Engine? Yes, but not automatically like on Docker Desktop.

What's supported (Linux Docker Engine)

- Docker Engine added support for `host.docker.internal` on Linux (via the "host-gateway" mechanism) in the 20.10 line. (docs.docker.com)
- To make it work, you typically map it explicitly to the host gateway IP:

docker run:

```shell
docker run --add-host=host.docker.internal:host-gateway …
```

docker compose:

```yaml
services:
  app:
    extra_hosts:
      - "host.docker.internal:host-gateway"
```

Docker Desktop note

- On Docker Desktop, `host.docker.internal` is provided as a special DNS name that resolves to the host automatically. (docs.docker.com)

Alternative approach (Linux-only)

- If you actually want the container to share the host network stack, use host networking (`--network host` / `network_mode: host`). This is a Linux host feature. (docs.docker.com)

Citations:
- 1: https://docs.docker.com/engine/release-notes/20.10/?utm_source=openai
- 2: https://docs.docker.com/desktop/features/networking/networking-how-tos/?utm_source=openai
- 3: https://docs.docker.com/engine/network/drivers/host/?utm_source=openai
🌐 Web query:

> Docker Desktop vs Linux Docker Engine host.docker.internal host-gateway

💡 Result:

Key difference: where the Docker daemon runs

- Docker Desktop (macOS/Windows, and Desktop for Linux) runs the Linux Docker Engine inside a lightweight VM, not directly on your host OS. Docker Desktop provides special DNS names under `docker.internal`, including `host.docker.internal`, which is made to resolve to "the host" from inside containers. [1][2]
- Docker Engine on Linux (native, no Desktop) runs directly on the Linux host. There is no built-in DNS name `host.docker.internal` unless you add it yourself. The standard pattern is to map it to the container network's host-side gateway using `host-gateway`. [3]

`host.docker.internal`: Desktop vs native Linux Engine

Docker Desktop

- `host.docker.internal` works out of the box (Docker Desktop provides it). [1][2]

Native Docker Engine on Linux

- Use the special keyword `host-gateway` to create the mapping; Docker documents `host-gateway` as a special value for `--add-host`. [3]

```shell
docker run --add-host=host.docker.internal:host-gateway …
```

- In Docker Compose (this relies on the same Engine feature): [3]

```yaml
services:
  app:
    extra_hosts:
      - "host.docker.internal:host-gateway"
```

What host-gateway actually targets (Linux Engine)

- `host-gateway` resolves to the gateway IP address for the container's network (commonly the host-side bridge interface, e.g. `docker0` on the default bridge). Docker also lets you override what it resolves to via `dockerd --host-gateway-ip` / `host-gateway-ip` in daemon config. [3]

Version note

- The `host-gateway` mechanism is associated with Docker Engine v20.10+ era behavior (and has release-notes history around `--add-host=…:host-gateway`). [4]

Sources: Docker Desktop networking/how-tos and internals [1][2], Docker dockerd reference documenting host-gateway and host-gateway-ip [3], Docker Engine 20.10 release notes [4].
Add documentation for Linux Docker Engine users to map host.docker.internal.

`host.docker.internal` works automatically on Docker Desktop but requires explicit configuration on Linux Docker Engine. Without the host-gateway mapping, containers will fail to reach the host on Linux systems. Update run/Compose documentation to include the required mappings for Linux users:

```shell
docker run --add-host=host.docker.internal:host-gateway …
```

```yaml
extra_hosts:
  - "host.docker.internal:host-gateway"
```
`uvx blender-mcp` without a version pin can pull a different version than what the repo commit represents. Pin to `1.5.5` to match.
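Per this suggestion, the pinned entrypoint is a one-line change:

```dockerfile
ENTRYPOINT ["uvx", "blender-mcp==1.5.5"]
```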
♻️ Duplicate comments (1)

Dockerfile (1)

13-13: ⚠️ Potential issue | 🟠 Major

Install the CLI during build instead of resolving it in `ENTRYPOINT`.

`uvx` is just `uv tool run`; it executes tools from a temporary environment, and uv's docs explicitly call out installation as the better fit when a tool should already be available inside a Docker image. Keeping this in `ENTRYPOINT` leaves first start dependent on package resolution and cache population. (docs.astral.sh)

Possible direction

```diff
 RUN pip install --no-cache-dir uv
@@
+ENV PATH="/home/appuser/.local/bin:${PATH}"
 USER appuser
+RUN uv tool install blender-mcp==1.5.5
-ENTRYPOINT ["uvx", "blender-mcp==1.5.5"]
+ENTRYPOINT ["blender-mcp"]
```
🧹 Nitpick comments (1)

Dockerfile (1)

3-3: Pin `uv` too if this image is meant to be reproducible.

`blender-mcp` is pinned, but `RUN pip install --no-cache-dir uv` still pulls whatever `uv` release is current at build time. That means rebuilds can drift even when the app version stays fixed. Astral documents version-specific installs and also publishes a dedicated Docker image for `uv`, so it's worth pinning the toolchain as well. (docs.astral.sh)
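Either pinning route from the nitpick might look like this (the `0.6.0` version tags are placeholders for illustration; pick a release you have actually tested):

```dockerfile
# Option A: pin the pip-installed toolchain (placeholder version)
RUN pip install --no-cache-dir uv==0.6.0

# Option B: copy a pinned uv binary from Astral's published image (placeholder tag)
COPY --from=ghcr.io/astral-sh/uv:0.6.0 /uv /uvx /usr/local/bin/
```

Option B also avoids depending on pip at all and keeps the toolchain version visible at the top of the Dockerfile.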
Summary
Adds a Dockerfile to enable containerized deployment via the Docker MCP Registry.
- Uses `uvx` to run blender-mcp, consistent with the project's recommended installation approach
- Sets `BLENDER_HOST=host.docker.internal` so the container can reach Blender on the host
- Sets `BLENDER_PORT=9876`, matching the default
- Disables telemetry by default (`DISABLE_TELEMETRY=true`) for containerized environments
- Runs as a non-root user (`appuser`) for security

Test plan

- `docker build -t blender-mcp .` builds successfully
- `docker run --rm blender-mcp` starts the MCP server (connects to Blender if the addon is running)
- `task validate`, `task build --tools`