Immich with GPU & Microservices

Googled777
This tutorial walks you through installing Immich on a single GPU node using Docker and Docker Compose.


Immich with GPU & Microservices (Hybrid Architecture on Docker)


1. System prerequisites

1.1. Base OS

Tested baseline:

  • Ubuntu 22.04 LTS (or similar modern Debian/Ubuntu)
  • NVIDIA GPU with proprietary drivers installed
  • Docker-compatible kernel

If you’re on a fresh install, make sure your system is fully updated first:

sudo apt update && sudo apt upgrade -y
sudo reboot

1.2. NVIDIA drivers

Install NVIDIA drivers from the Ubuntu repositories (or your preferred method):

sudo ubuntu-drivers autoinstall
sudo reboot

After reboot, verify:

nvidia-smi

You should see your GPU listed with driver and CUDA versions.
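For a scripted check, `nvidia-smi` also supports machine-readable query flags (`--query-gpu` and `--format` are standard flags). The sketch below parses the CSV output with a small helper of our own, `parse_gpu_line`, so the parsing itself can be tried without a GPU present:

```shell
# Extract the GPU name from nvidia-smi's "name, driver_version" CSV lines.
parse_gpu_line() {
  cut -d',' -f1
}

if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name,driver_version --format=csv,noheader | parse_gpu_line
else
  echo "nvidia-smi not found - drivers are not installed" >&2
fi
```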


2. Install Docker & Docker Compose

2.1. Install Docker Engine

sudo apt update

# Required packages
sudo apt install -y \
  ca-certificates \
  curl \
  gnupg \
  lsb-release

# Docker GPG key
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
  sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

# Docker repository
echo \
  "deb [arch=$(dpkg --print-architecture) \
  signed-by=/etc/apt/keyrings/docker.gpg] \
  https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt update

# Install Docker
sudo apt install -y \
  docker-ce \
  docker-ce-cli \
  containerd.io \
  docker-buildx-plugin \
  docker-compose-plugin

2.2. Enable and test Docker

sudo systemctl enable docker
sudo systemctl start docker

sudo docker run --rm hello-world

2.3. Add your user to the docker group (optional)

sudo usermod -aG docker "$USER"
# Log out and back in, or:
newgrp docker

3. Install NVIDIA Container Toolkit (GPU passthrough)

3.1. Add NVIDIA Container Toolkit repository

distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
  sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit.gpg

curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

sudo apt update

3.2. Install and configure NVIDIA Container Toolkit

sudo apt install -y nvidia-container-toolkit

sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

3.3. Verify GPU passthrough in Docker

docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

If you see your GPU, Docker + NVIDIA runtime are correctly configured.


4. Prepare Immich directory and environment

4.1. Create project directory

mkdir -p ~/immich-hybrid
cd ~/immich-hybrid

4.2. Clone Immich repository (for reference)

We’ll clone the Immich repo to have access to scripts and reference files, but we’ll use a custom docker-compose.yml tailored to the hybrid architecture.

git clone https://github.com/immich-app/immich.git

You can keep this repo next to your compose file, e.g.:

~/immich-hybrid/
  docker-compose.yml
  .env
  immich/          # cloned repo

4.3. Create the .env file

Create a .env file in ~/immich-hybrid:

nano .env

Example minimal content (adapt to your needs):

# Immich version
IMMICH_VERSION=release

# Uploads and database storage paths on host
UPLOAD_LOCATION=/srv/immich/uploads
DB_DATA_LOCATION=/srv/immich/db

# Database credentials
DB_USERNAME=immich
DB_PASSWORD=immich_password_here
DB_DATABASE_NAME=immich

# Optional: timezone
TZ=Europe/Amsterdam

Create the directories referenced above:

sudo mkdir -p /srv/immich/uploads
sudo mkdir -p /srv/immich/db
sudo chown -R "$USER":"$USER" /srv/immich

5. Hybrid docker-compose.yml with GPU & microservices

Now we define a hybrid Immich stack:

  • immich-server – main API + web backend, GPU transcoding
  • immich_microservices – background jobs, using start-microservices.sh
  • immich-machine-learning – GPU-accelerated ML
  • redis – queue + cache
  • database – Postgres with vector extensions

Important: Service names (redis, database, immich-machine-learning) are intentionally kept in the older style to match an existing working setup. Do not blindly mix this with the official modern compose without understanding the differences.

5.1. Create docker-compose.yml

nano docker-compose.yml

Paste the following:

name: immich

services:
  immich-server:
    container_name: immich_server
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
    volumes:
      - ${UPLOAD_LOCATION}:/data
      - /etc/localtime:/etc/localtime:ro
    env_file:
      - .env
    ports:
      - "9100:2283"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    runtime: nvidia
    environment:
      - IMMICH_FFMPEG_HWACCEL=true
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=compute,video,utility
    devices:
      - /dev/dri:/dev/dri
    depends_on:
      - redis
      - database
      - immich-machine-learning
      - immich_microservices
    restart: always
    healthcheck:
      disable: false

  immich_microservices:
    container_name: immich_microservices
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
    restart: always
    command: ["start-microservices.sh"]
    env_file:
      - .env
    # Note: no depends_on here to avoid mismatches in hybrid setups

  immich-machine-learning:
    container_name: immich_machine_learning
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}-cuda
    restart: always
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    environment:
      - NVIDIA_DISABLE_REQUIRE=1
    volumes:
      - model-cache:/cache
      - /etc/localtime:/etc/localtime:ro
    env_file:
      - .env
    healthcheck:
      disable: false

  redis:
    container_name: immich_redis
    image: docker.io/valkey/valkey:9@sha256:546304417feac0874c3dd576e0952c6bb8f06bb4093ea0c9ca303c73cf458f63
    restart: always
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5

  database:
    container_name: immich_postgres
    image: ghcr.io/immich-app/postgres:14-vectorchord0.4.3-pgvectors0.2.0@sha256:bcf63357191b76a916ae5eb93464d65c07511da41e3bf7a8416db519b40b1c23
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_DB: ${DB_DATABASE_NAME}
      POSTGRES_INITDB_ARGS: '--data-checksums'
    volumes:
      - ${DB_DATA_LOCATION}:/var/lib/postgresql/data
    shm_size: 128mb
    restart: always
    healthcheck:
      disable: false

volumes:
  model-cache:

5.2. Key points in this compose

  • GPU passthrough is enabled on:
    • immich-server (transcoding)
    • immich-machine-learning (ML inference)
  • Microservices uses:
    command: ["start-microservices.sh"]
    This is critical. Without this, it would try to run as another server instance.
  • Service names:
    • redis and database are intentionally simple.
    • immich-machine-learning uses a hyphen in the service name.
    • container_name values are cosmetic and can differ from service names.
  • Port mapping:
    • Host port 9100 maps to container port 2283, so Immich is reachable at http://<server-ip>:9100
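Before bringing anything up, it can be worth confirming that every variable the compose file interpolates is actually defined in `.env`, and then letting Compose validate the YAML. This is a sketch: the `required_vars` helper is ours, while `docker compose config --quiet` is a standard Compose v2 subcommand.

```shell
# Succeeds only if every NAME given as an argument appears as "NAME=..."
# in the env file supplied on stdin.
required_vars() {
  env_text=$(cat)
  for name in "$@"; do
    printf '%s\n' "$env_text" | grep -q "^${name}=" \
      || { echo "missing in .env: $name" >&2; return 1; }
  done
}

if required_vars IMMICH_VERSION UPLOAD_LOCATION DB_DATA_LOCATION \
     DB_USERNAME DB_PASSWORD DB_DATABASE_NAME < .env; then
  # Renders the fully-interpolated config; YAML or variable-substitution
  # errors are reported here instead of at container startup.
  docker compose config --quiet && echo "compose file is valid"
fi
```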

6. Start Immich

6.1. Bring up the stack

cd ~/immich-hybrid

docker compose pull
docker compose up -d

6.2. Check container status

docker compose ps

You should see:

  • immich_server – running
  • immich_microservices – running
  • immich_machine_learning – running
  • immich_redis – running
  • immich_postgres – running
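If you want this check scripted (for example in a cron job), a sketch like the following compares `docker ps` output against the five expected container names. The `all_running` helper is ours and only does text matching, so its logic can be exercised without a live daemon:

```shell
# Returns success only when every expected container name appears in
# the list of running container names supplied on stdin.
all_running() {
  ps_text=$(cat)
  for name in immich_server immich_microservices immich_machine_learning \
              immich_redis immich_postgres; do
    printf '%s\n' "$ps_text" | grep -q "^${name}$" \
      || { echo "not running: $name" >&2; return 1; }
  done
}

if docker ps --format '{{.Names}}' 2>/dev/null | all_running; then
  echo "all five Immich containers are up"
fi
```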

6.3. Access the Immich web UI

Open your browser and go to:

http://<server-ip>:9100

Complete the initial setup (admin user, etc.).
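You can also probe the API from the command line. A sketch, with one caveat: the ping endpoint is /api/server/ping on current Immich releases, while older releases used /api/server-info/ping, so adjust the path in the `ping_url` helper (ours) if you get a 404.

```shell
# Build the health-check URL for a given host:port.
ping_url() {
  printf 'http://%s/api/server/ping' "$1"
}

if curl -fsS "$(ping_url localhost:9100)" 2>/dev/null; then
  echo " <- Immich API answered"
else
  echo "API not reachable yet - check: docker compose logs immich-server" >&2
fi
```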


7. Verify GPU usage

7.1. Check GPU from the host

nvidia-smi

While uploading or transcoding media, you should see processes from the Immich containers.
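`nvidia-smi dmon` is handy here: it streams per-second utilization samples, with the enc/dec columns showing NVENC/NVDEC (hardware transcode) activity separately from compute. These are standard `nvidia-smi` flags; `-c 5` just limits the run to five samples so the command exits on its own:

```shell
# Five one-second utilization samples: sm = compute (ML inference),
# enc/dec = NVENC/NVDEC (hardware transcoding).
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi dmon -s u -c 5
else
  echo "nvidia-smi not found on PATH" >&2
fi
```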

7.2. Check inside the Immich server container

docker exec -it immich_server nvidia-smi

If this shows your GPU, the container has proper GPU access.


8. Troubleshooting

8.1. Microservices not starting

  • Symptom: immich_microservices exits immediately.
  • Check logs:
docker logs immich_microservices

Common causes:

  • Wrong command (must be ["start-microservices.sh"])
  • Broken or missing .env
  • Redis or database unreachable (check redis and database containers)
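Each dependency can be probed directly with `docker exec`, using the container names from the compose file above: the same `redis-cli ping` the compose healthcheck uses, and `pg_isready`, which ships in the Postgres image. The `check` wrapper is ours, so one failed probe doesn't abort the rest; the `DB_USERNAME` fallback of `immich` matches the example `.env`.

```shell
# Run a probe and report failure without aborting the remaining checks.
check() {
  "$@" || echo "FAILED: $*" >&2
}

check docker exec immich_redis redis-cli ping
check docker exec immich_postgres pg_isready -U "${DB_USERNAME:-immich}"
# Confirm the .env values actually reached the microservices container:
check docker exec immich_microservices sh -c 'env | grep ^DB_'
```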

8.2. Server fails when adding microservices

If the server collapses when you add microservices, check:

  • That you are not using depends_on with wrong service names.
  • That immich_microservices is not trying to run as a second server instance.

8.3. GPU not used

  • Verify NVIDIA runtime is configured:
    cat /etc/docker/daemon.json
    Should contain something like:
    {
      "runtimes": {
        "nvidia": {
          "path": "nvidia-container-runtime",
          "runtimeArgs": []
        }
      }
    }
  • Ensure deploy.resources.reservations.devices and runtime: nvidia are present on GPU-enabled services.
  • Ensure /dev/dri:/dev/dri is mapped for immich-server.
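The runtime registration can also be checked without opening daemon.json, by asking Docker directly (`docker info --format` is a standard flag; the grep is just a convenience):

```shell
# Lists the runtimes Docker knows about; "nvidia" must be among them.
if docker info --format '{{.Runtimes}}' 2>/dev/null | grep -q nvidia; then
  echo "nvidia runtime registered"
else
  echo "nvidia runtime missing - re-run: sudo nvidia-ctk runtime configure --runtime=docker" >&2
fi
```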

9. Recap

This tutorial gave you a complete, working hybrid Immich deployment with:

  • Docker + Docker Compose installed and configured
  • NVIDIA Container Toolkit for GPU passthrough
  • Immich server with GPU transcoding
  • Immich machine learning with GPU inference
  • Microservices enabled via start-microservices.sh
  • Redis (Valkey) and Postgres pinned to known-good images
  • A clean, reproducible docker-compose.yml

You can now extend this into a doctrine block, adapt it for production, or evolve it later into the full modern Immich architecture with proxy and web containers.
