Immich GPU Installation Doctrine
From bare Ubuntu + NVIDIA to a fully GPU‑accelerated Immich stack with port 2283 and a modern multi‑service layout.
1. Architecture overview
This doctrine installs Immich on a GPU‑equipped Ubuntu machine (like your MSI or Johannes) using Docker, NVIDIA GPU passthrough, and the modern multi‑service architecture:
- immich_server – main API and web UI (listening on port 2283)
- immich_microservices – background workers
- immich_machine_learning – your CUDA‑enabled ML container
- immich_redis – Valkey/Redis for caching and queues
- immich_postgres – Postgres with vector extensions
All public access goes to: http://<server-ip>:2283
The compose file publishes port 2283:2283 (host → container).

2. Requirements
2.1. Hardware and OS
- OS: Ubuntu (22.04+ recommended)
- GPU: NVIDIA GPU (e.g. RTX 4060 / 4070, etc.)
- RAM: 16GB+ recommended
- Disk: SSD/NVMe, with enough space for DB + photos
2.2. You should have
- Root or sudo access on the machine
- Working network (for pulling Docker images)
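The requirements above can be roughly checked up front. This is a sketch using standard tools (free, df, lspci); the 15 GB threshold is an assumption that allows for free -g rounding a 16 GB machine down.

```shell
# Rough pre-flight check for the requirements above (thresholds are indicative).
# Note: free -g rounds down, so a 16 GB machine may report 15.
free -g | awk '/^Mem:/ { if ($2 >= 15) print "RAM: OK (" $2 "G)"; else print "RAM: low (" $2 "G)" }'
df -h "$HOME" | tail -1 | awk '{ print "Free disk under $HOME: " $4 }'
lspci | grep -qi nvidia && echo "NVIDIA GPU: detected" || echo "NVIDIA GPU: not found on PCI bus"
```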
3. Install NVIDIA driver and verify GPU
3.1. Install recommended NVIDIA driver
On a fresh Ubuntu install, enable the official drivers and install the recommended NVIDIA driver:
sudo apt update
sudo apt install -y ubuntu-drivers-common
# Show recommended driver
ubuntu-drivers devices
# Install the recommended driver (example)
sudo ubuntu-drivers autoinstall
sudo reboot

3.2. Verify GPU is working
After reboot, confirm the driver is active and the GPU is visible:
nvidia-smi

You should see something like your MSI/Johannes output: driver version, CUDA version, and the GPU listed with temperature, power, and memory usage.
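For repeat installs, the same check can be scripted. A small sketch; the query flags are standard nvidia-smi options, and the Secure Boot/DKMS hints are common failure causes rather than a diagnosis:

```shell
# Scriptable driver check: prints GPU name, driver version and total memory,
# or a hint about what to investigate.
if ! command -v nvidia-smi >/dev/null 2>&1; then
    echo "nvidia-smi not found: the driver package did not install" >&2
elif ! nvidia-smi >/dev/null 2>&1; then
    echo "nvidia-smi present but failing: check Secure Boot / DKMS module build" >&2
else
    nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv,noheader
fi
```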
If nvidia-smi fails or shows “no devices”, fix this before moving on. The entire stack depends on a working NVIDIA driver.

4. Install Docker and NVIDIA container runtime
4.1. Install Docker Engine
Install Docker using the official packages:
sudo apt update
sudo apt install -y \
ca-certificates \
curl \
gnupg \
lsb-release
# Docker GPG key
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
# Docker repo
echo \
"deb [arch=$(dpkg --print-architecture) \
signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# Enable and start Docker
sudo systemctl enable docker
sudo systemctl start docker
# Optional: add your user to the docker group
sudo usermod -aG docker "$USER"
# Log out and back in after this.

4.2. Install NVIDIA container toolkit
This enables --gpus all support and the deploy.resources.reservations.devices block in Compose.
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt update
sudo apt install -y nvidia-container-toolkit
# Configure Docker to use NVIDIA runtime
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

4.3. Verify GPU works inside Docker
Run a simple GPU test container:
docker run --rm --gpus all nvidia/cuda:12.3.2-base-ubuntu22.04 nvidia-smi

The output should match the host's nvidia-smi.

5. Folder layout and environment file
5.1. Create base directories
Use a clean layout for Immich and related data:
mkdir -p ~/immich/docker
mkdir -p ~/immich/postgres
mkdir -p ~/immich/library
mkdir -p ~/immich/model-cache
cd ~/immich/docker

5.2. Create the .env file
Inside ~/immich/docker, create .env:
nano .env

Example .env content:
# -------- Immich core settings --------
IMMICH_VERSION=release
# External URL (http only, no proxy)
IMMICH_SERVER_URL=http://localhost:2283
# -------- Database --------
DB_HOST=immich_postgres
DB_PORT=5432
DB_USERNAME=immich
DB_PASSWORD=immich_password_change_me
DB_DATABASE_NAME=immich
DB_DATA_LOCATION=/home/YOUR_USER/immich/postgres
# -------- Redis / Valkey --------
REDIS_HOSTNAME=immich_redis
REDIS_PORT=6379
# -------- Storage paths inside containers --------
IMMICH_UPLOAD_LOCATION=/usr/src/app/upload
# -------- Misc --------
TZ=Europe/Amsterdam

Replace /home/YOUR_USER with your actual username. You can also generate stronger DB credentials; just keep them in sync with the compose file.

6. Full docker-compose.yml (GPU, port 2283)
In ~/immich/docker, create docker-compose.yml with the following content. This uses:
- Modern architecture A (server + microservices + ML + Redis + Postgres)
- Your GPU ML block (directly copied from Johannes/MSI)
- Port 2283:2283 for the Immich server
services:
  immich_server:
    container_name: immich_server
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
    restart: always
    ports:
      - "2283:2283"
    env_file:
      - .env
    depends_on:
      - immich_machine_learning
      - immich_redis
      - immich_postgres

  immich_microservices:
    container_name: immich_microservices
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
    restart: always
    command: ["start-microservices.sh"]
    env_file:
      - .env
    depends_on:
      - immich_machine_learning
      - immich_redis
      - immich_postgres

  immich_machine_learning:
    container_name: immich_machine_learning
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}-cuda
    restart: always
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    environment:
      - NVIDIA_DISABLE_REQUIRE=1
    volumes:
      - model-cache:/cache
      - /etc/localtime:/etc/localtime:ro
    env_file:
      - .env
    healthcheck:
      disable: false

  immich_redis:
    container_name: immich_redis
    image: docker.io/valkey/valkey:9
    restart: always
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5

  immich_postgres:
    container_name: immich_postgres
    image: ghcr.io/immich-app/postgres:14-vector
    restart: always
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_DB: ${DB_DATABASE_NAME}
      POSTGRES_INITDB_ARGS: '--data-checksums'
    volumes:
      - ${DB_DATA_LOCATION}:/var/lib/postgresql/data
    shm_size: 128mb
    healthcheck:
      disable: false

volumes:
  model-cache:

7. Start Immich and verify
7.1. Start the stack
From ~/immich/docker:
docker compose up -d

Docker will pull the images and start five containers: immich_server, immich_microservices, immich_machine_learning, immich_redis, immich_postgres.
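Once the containers start, a scripted readiness check can replace manual refreshing. A sketch, assuming the default 2283 mapping; the ping path is an assumption (newer Immich releases expose /api/server/ping, older ones /api/server-info/ping), so adjust it to your version:

```shell
# Poll the Immich server until it answers on port 2283 (up to ~60 s).
up=0
for i in $(seq 1 12); do
    if curl -fsS http://localhost:2283/api/server/ping >/dev/null 2>&1; then
        up=1
        break
    fi
    sleep 5
done
[ "$up" = 1 ] && echo "Immich server is up" || echo "Immich did not respond within 60 s" >&2
```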
7.2. Check container status
docker ps

All Immich containers should show STATUS as Up.
7.3. Access the Immich web UI
In your browser, go to:
http://<server-ip>:2283

On first launch, you’ll be prompted to create an admin account and complete initial setup.
8. Verify GPU acceleration
8.1. Watch ML container logs
Tail the ML container logs to see model loading and GPU usage:
docker logs -f immich_machine_learning

You should see logs about downloading/loading models and potentially CUDA initialization.
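The ML logs can be noisy, so filtering helps. A sketch; the grep pattern is an assumption (exact log wording varies between Immich releases, though the ML service is built on ONNX Runtime, whose execution-provider lines are usually the interesting ones):

```shell
# Show only the most recent GPU-related lines from the ML container's logs.
# docker logs writes to both stdout and stderr, hence the 2>&1.
docker logs immich_machine_learning 2>&1 | grep -iE 'cuda|gpu|onnx|provider' | tail -20
```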
8.2. Check GPU usage with nvidia-smi
Run:
nvidia-smi

You should see a python process using VRAM, similar to your MSI output:

| GPU   GI   CI    PID   Type   Process name   GPU Memory |
|  0   N/A  N/A   1234    C     python             500MiB |

This python process is the Immich ML container using the GPU. When you upload photos, GPU utilization and VRAM usage will spike as recognition runs.

8.3. Functional test
- Log into Immich via browser.
- Upload a batch of photos.
- Watch docker logs -f immich_machine_learning and nvidia-smi while upload/processing runs.
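For a compact per-process view while the batch processes, nvidia-smi's query mode helps. The --query-compute-apps and --format=csv flags are standard nvidia-smi options; the awk reshaping is just a convenience sketch:

```shell
# One-shot: list compute processes and their VRAM usage, reshaped as one
# readable line per process (e.g. "PID 1234 uses 500 MiB (python)").
nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv,noheader \
  | awk -F', ' '{ printf "PID %s uses %s (%s)\n", $1, $3, $2 }'
```

Wrap it in `watch -n 2 '…'` to refresh it every two seconds during processing.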
9. Troubleshooting patterns
9.1. Container name conflicts
If you see an error like:
Conflict. The container name "immich_microservices" is already in use...

Remove the old container:
docker rm -f immich_microservices

Then bring the stack up again:
docker compose up -d

9.2. Orphan containers from previous setups
List Immich containers:
docker ps -a | grep immich

If you see old containers from previous configs and you don’t need them:
docker rm -f immich_server immich_microservices immich_machine_learning immich_postgres immich_redis

9.3. GPU not used by ML container
- Check nvidia-smi – is the GPU visible at all?
- Check docker run --rm --gpus all nvidia/cuda:... nvidia-smi – does the GPU work inside a test container?
- Inspect docker logs immich_machine_learning – look for CUDA-related errors.
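The three checks above can be combined into a one-shot diagnostic. A sketch using the container and image names from this doctrine; "CUDA-related log lines: 0" on a freshly started container is not necessarily a failure:

```shell
# One-shot GPU diagnostic: host driver, Docker GPU passthrough, ML container logs.
nvidia-smi >/dev/null 2>&1 && echo "host GPU: OK" || echo "host GPU: FAIL"
docker run --rm --gpus all nvidia/cuda:12.3.2-base-ubuntu22.04 nvidia-smi >/dev/null 2>&1 \
    && echo "docker GPU: OK" || echo "docker GPU: FAIL"
echo "CUDA-related log lines: $(docker logs immich_machine_learning 2>&1 | grep -ci cuda)"
```

Whichever line reports FAIL first is the layer to fix before looking further up the stack.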
9.4. Port already in use
If 2283 is already in use on the host:
sudo lsof -i :2283

Either stop the conflicting process or change the host port in docker-compose.yml to something else, e.g.:
ports:
  - "9100:2283"

10. Doctrine summary
This block codifies the pattern you’ve already proven on Johannes and the MSI:
- Establish a clean GPU foundation. A working NVIDIA driver on Ubuntu and a passing nvidia-smi are non-negotiable.
- Layer Docker and the NVIDIA runtime on top. Validate GPU usage inside a test container before deploying Immich.
- Use a standardized Immich layout. Server + microservices + ML + Redis + Postgres, no ad-hoc variations per node.
- Reuse the same GPU ML block everywhere. Your immich_machine_learning config is now a portable doctrine unit for any GPU node.
- Keep ports explicit and consistent. Internal port 2283 is your canonical truth; host mapping is a conscious choice.
The result: one docker-compose.yml, a single .env, and a repeatable pattern you can apply to any future GPU node in the fleet.