Immich with GPU & Microservices (Hybrid Architecture on Docker)
1. System prerequisites
1.1. Base OS
Tested baseline:
- Ubuntu 22.04 LTS (or similar modern Debian/Ubuntu)
- NVIDIA GPU with proprietary drivers installed
- Docker-compatible kernel
sudo apt update && sudo apt upgrade -y
sudo reboot
1.2. NVIDIA drivers
Install NVIDIA drivers from the Ubuntu repositories (or your preferred method):
sudo ubuntu-drivers autoinstall
sudo reboot
After reboot, verify:
nvidia-smi
You should see your GPU listed with driver and CUDA versions.
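The same check can be scripted, which is handy for provisioning scripts. This is a small sketch that assumes `nvidia-smi` lands on the PATH after the driver install:

```shell
# Print GPU name and driver version in compact CSV form;
# fall back to a message if the driver is not installed yet.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name,driver_version --format=csv,noheader
else
  echo "nvidia-smi not found - drivers are not installed (or PATH is wrong)"
fi
```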
2. Install Docker & Docker Compose
2.1. Install Docker Engine
sudo apt update
# Required packages
sudo apt install -y \
ca-certificates \
curl \
gnupg \
lsb-release
# Docker GPG key
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
# Docker repository
echo \
"deb [arch=$(dpkg --print-architecture) \
signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
# Install Docker
sudo apt install -y \
docker-ce \
docker-ce-cli \
containerd.io \
docker-buildx-plugin \
docker-compose-plugin
2.2. Enable and test Docker
sudo systemctl enable docker
sudo systemctl start docker
sudo docker run --rm hello-world
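As an additional sanity check, you can confirm that both the engine and the Compose plugin installed correctly (a guarded sketch, in case Docker is not on the PATH yet):

```shell
# Show installed Docker Engine and Compose plugin versions.
if command -v docker >/dev/null 2>&1; then
  docker --version
  docker compose version || echo "compose plugin missing"
else
  echo "docker not on PATH"
fi
```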
2.3. Add your user to the docker group (optional)
sudo usermod -aG docker "$USER"
# Log out and back in, or:
newgrp docker
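You can verify that the group change took effect with a quick check (group membership only updates after a re-login or `newgrp docker`):

```shell
# Verify the current user is in the "docker" group.
if id -nG | grep -qw docker; then
  echo "user is in the docker group"
else
  echo "user is NOT in the docker group yet (log out and back in)"
fi
```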
3. Install NVIDIA Container Toolkit (GPU passthrough)
3.1. Add NVIDIA Container Toolkit repository
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt update
3.2. Install and configure NVIDIA Container Toolkit
sudo apt install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
3.3. Verify GPU passthrough in Docker
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
If you see your GPU, Docker + NVIDIA runtime are correctly configured.
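If the CUDA test container fails, a first diagnostic is to check whether Docker actually registered the runtime that `nvidia-ctk runtime configure` adds. A guarded sketch:

```shell
# Confirm Docker registered the "nvidia" runtime.
if command -v docker >/dev/null 2>&1; then
  docker info --format '{{json .Runtimes}}' | grep -q '"nvidia"' \
    && echo "nvidia runtime registered" \
    || echo "nvidia runtime not found in docker info"
else
  echo "docker not on PATH"
fi
```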
4. Prepare Immich directory and environment
4.1. Create project directory
mkdir -p ~/immich-hybrid
cd ~/immich-hybrid
4.2. Clone Immich repository (for reference)
We’ll clone the Immich repo to have access to scripts and reference files, but we’ll use a custom docker-compose.yml tailored to the hybrid architecture.
git clone https://github.com/immich-app/immich.git
You can keep this repo next to your compose file, e.g.:
~/immich-hybrid/
  docker-compose.yml
  .env
  immich/          # cloned repo
4.3. Create the .env file
Create a .env file in ~/immich-hybrid:
nano .env
Example minimal content (adapt to your needs):
# Immich version
IMMICH_VERSION=release
# Uploads and database storage paths on host
UPLOAD_LOCATION=/srv/immich/uploads
DB_DATA_LOCATION=/srv/immich/db
# Database credentials
DB_USERNAME=immich
DB_PASSWORD=immich_password_here
DB_DATABASE_NAME=immich
# Optional: timezone
TZ=Europe/Amsterdam
Create the directories referenced above:
sudo mkdir -p /srv/immich/uploads
sudo mkdir -p /srv/immich/db
sudo chown -R "$USER":"$USER" /srv/immich
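Before starting the stack, it is worth confirming that both storage paths exist and are writable by your user, since a permission problem here surfaces later as opaque container errors:

```shell
# Sanity-check the storage paths referenced in .env.
for d in /srv/immich/uploads /srv/immich/db; do
  if [ -d "$d" ] && [ -w "$d" ]; then
    echo "$d: ok"
  else
    echo "$d: missing or not writable"
  fi
done
```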
5. Hybrid docker-compose.yml with GPU & microservices
Now we define a hybrid Immich stack:
- immich-server – main API + web backend, GPU transcoding
- immich_microservices – background jobs, using start-microservices.sh
- immich-machine-learning – GPU-accelerated ML
- redis – queue + cache
- database – Postgres with vector extensions
Note: some services (redis, database, immich-machine-learning) are intentionally kept in the older style to match an existing working setup. Do not blindly mix this with the official modern compose file without understanding the differences.
5.1. Create docker-compose.yml
nano docker-compose.yml
Paste the following:
name: immich

services:
  immich-server:
    container_name: immich_server
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
    volumes:
      - ${UPLOAD_LOCATION}:/data
      - /etc/localtime:/etc/localtime:ro
    env_file:
      - .env
    ports:
      - "9100:2283"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    runtime: nvidia
    environment:
      - IMMICH_FFMPEG_HWACCEL=true
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=compute,video,utility
    devices:
      - /dev/dri:/dev/dri
    depends_on:
      - redis
      - database
      - immich-machine-learning
      - immich_microservices
    restart: always
    healthcheck:
      disable: false

  immich_microservices:
    container_name: immich_microservices
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
    restart: always
    command: ["start-microservices.sh"]
    env_file:
      - .env
    # Note: no depends_on here to avoid mismatches in hybrid setups

  immich-machine-learning:
    container_name: immich_machine_learning
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}-cuda
    restart: always
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    environment:
      - NVIDIA_DISABLE_REQUIRE=1
    volumes:
      - model-cache:/cache
      - /etc/localtime:/etc/localtime:ro
    env_file:
      - .env
    healthcheck:
      disable: false

  redis:
    container_name: immich_redis
    image: docker.io/valkey/valkey:9@sha256:546304417feac0874c3dd576e0952c6bb8f06bb4093ea0c9ca303c73cf458f63
    restart: always
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5

  database:
    container_name: immich_postgres
    image: ghcr.io/immich-app/postgres:14-vectorchord0.4.3-pgvectors0.2.0@sha256:bcf63357191b76a916ae5eb93464d65c07511da41e3bf7a8416db519b40b1c23
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_DB: ${DB_DATABASE_NAME}
      POSTGRES_INITDB_ARGS: '--data-checksums'
    volumes:
      - ${DB_DATA_LOCATION}:/var/lib/postgresql/data
    shm_size: 128mb
    restart: always
    healthcheck:
      disable: false

volumes:
  model-cache:
5.2. Key points in this compose
- GPU passthrough is enabled on:
  - immich-server (transcoding)
  - immich-machine-learning (ML inference)
- Microservices uses command: ["start-microservices.sh"]. This is critical: without it, the container would try to run as another server instance.
- Service names: redis and database are intentionally simple; immich-machine-learning uses a hyphen in the service name; the container_name values are cosmetic and can differ from the service names.
- Port mapping: Immich is exposed on http://<server-ip>:9100
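Before bringing anything up, you can let Compose parse the file and interpolate the `.env` values. This catches indentation mistakes and missing variables without starting a single container (run it from the directory holding both files, e.g. `~/immich-hybrid`):

```shell
# Validate docker-compose.yml + .env without starting containers.
if command -v docker >/dev/null 2>&1; then
  docker compose config --quiet && echo "compose file is valid" \
    || echo "compose file has errors - see the messages above"
else
  echo "docker not on PATH"
fi
```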
6. Start Immich
6.1. Bring up the stack
cd ~/immich-hybrid
docker compose pull
docker compose up -d
6.2. Check container status
docker compose ps
You should see:
- immich_server – running
- immich_microservices – running
- immich_machine_learning – running
- immich_redis – running
- immich_postgres – running
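"Running" alone does not mean "healthy". A small sketch, assuming the `container_name` values from the compose file above, that prints the health state of each container (and handles containers without a healthcheck):

```shell
# Print health status for each Immich container.
for c in immich_server immich_microservices immich_machine_learning \
         immich_redis immich_postgres; do
  docker inspect --format \
    '{{.Name}}: {{if .State.Health}}{{.State.Health.Status}}{{else}}no healthcheck{{end}}' \
    "$c" 2>/dev/null || echo "$c: not found"
done
```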
6.3. Access the Immich web UI
Open your browser and go to:
http://<server-ip>:9100
Complete the initial setup (admin user, etc.).
7. Verify GPU usage
7.1. Check GPU from the host
nvidia-smi
While uploading or transcoding media, you should see processes from the Immich containers.
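For transcoding specifically, the per-process view can be misleading; the utilization monitor is more informative because its `enc`/`dec` columns show NVENC/NVDEC activity directly. A guarded sketch:

```shell
# Sample GPU utilization 5 times; the "enc"/"dec" columns show
# hardware encode/decode activity during Immich transcodes.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi dmon -s u -c 5
else
  echo "nvidia-smi not found"
fi
```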
7.2. Check inside the Immich server container
docker exec -it immich_server nvidia-smi
If this shows your GPU, the container has proper GPU access.
8. Troubleshooting
8.1. Microservices not starting
- Symptom: immich_microservices exits immediately.
- Check the logs:
docker logs immich_microservices
Common causes:
- Wrong command (it must be ["start-microservices.sh"])
- Broken or missing .env
- Redis or database unreachable (check the redis and database containers)
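The "unreachable" case can be tested directly from the host. A sketch assuming the container names and the `DB_USERNAME=immich` value from the example `.env` above:

```shell
# Ping the queue and the database from inside their containers.
docker exec immich_redis redis-cli ping \
  || echo "redis/valkey not responding"
docker exec immich_postgres pg_isready -U immich \
  || echo "postgres not accepting connections"
```

A healthy queue answers `PONG`; a healthy database reports that it is accepting connections.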
8.2. Server fails when adding microservices
If the server crashes when you add microservices, check:
- That you are not using depends_on with wrong service names.
- That immich_microservices is not trying to run as a second server instance.
8.3. GPU not used
- Verify the NVIDIA runtime is configured:
cat /etc/docker/daemon.json
It should contain something like:
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
- Ensure deploy.resources.reservations.devices and runtime: nvidia are present on GPU-enabled services.
- Ensure /dev/dri:/dev/dri is mapped for immich-server.
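Since the machine-learning container is the other GPU consumer, it helps to confirm it can see the GPU too (the NVIDIA runtime injects `nvidia-smi` into the container):

```shell
# Confirm the ML container can see the GPU.
docker exec immich_machine_learning nvidia-smi -L \
  || echo "GPU not visible inside immich_machine_learning"
```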
9. Recap
This tutorial gave you a complete, working hybrid Immich deployment with:
- Docker + Docker Compose installed and configured
- NVIDIA Container Toolkit for GPU passthrough
- Immich server with GPU transcoding
- Immich machine learning with GPU inference
- Microservices enabled via start-microservices.sh
- Redis and Postgres using known-good pinned images
- A clean, reproducible docker-compose.yml
You can now extend this into a doctrine block, adapt it for production, or evolve it later into the full modern Immich architecture with proxy and web containers.