Optimizing Immich Upload Performance (Images and Video)
Immich rarely saturates a fast fiber connection during uploads, especially with video-heavy libraries. This is not a fault in your hardware or line, but a direct consequence of how Immich processes files on upload. This page explains the logic behind that behavior and how to tune your setup for maximum practical throughput.
1. Why Immich uploads are slower than raw fiber speed
1.1 Uploads are not simple file copies
Immich is not a dumb file drop. As soon as a file reaches the server, Immich performs multiple operations before an upload is considered "done":
- Hashing: calculate checksums to detect duplicates and ensure integrity.
- Metadata extraction: read EXIF, timestamps, camera details, GPS data.
- Database writes: create or update asset records in Postgres.
- Job scheduling: enqueue Smart Search, thumbnail generation, and video processing jobs.
- Disk writes: write the file to storage in chunks and finalize it safely.
All of this work consumes CPU, RAM, disk I/O, and database capacity. Even if your fiber line can push hundreds of megabits per second, the bottleneck is often processing, not the raw network.
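To get a feel for how much one of these steps costs on your own hardware, you can time a checksum over a throwaway file. This is an illustration only: the file size is arbitrary and Immich's internal hashing pipeline may use a different algorithm and chunking strategy.

```shell
# Illustration: time a checksum over a throwaway 256 MiB file to see roughly
# what per-file hashing costs on this machine (Immich's internals may differ).
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=256 status=none
time sha1sum "$f"
rm -f "$f"
```

Multiply that time across thousands of assets and it becomes clear why ingest is slower than a raw byte copy.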
1.2 Videos are heavier than images
Video uploads feel slower because each video triggers extra processing steps:
- Container and codec analysis: inspect the video format, codecs, and duration.
- Thumbnails and previews: generate stills and preview clips.
- Optional transcoding: convert to more compatible or efficient formats.
- Additional metadata: store resolution, frame rate, bit rate, and audio data.
These steps can involve both CPU and GPU, and they compete with upload throughput for the same resources.
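Immich uses FFmpeg tooling for its video handling, so you can preview what the analysis step looks like by running ffprobe (bundled with FFmpeg) against one of your own files. The filename below is a placeholder.

```shell
# Inspect container, codecs, resolution, and duration the way a media server's
# analysis step does. "input.mp4" is a placeholder; ffprobe ships with FFmpeg.
ffprobe -v error \
  -show_entries format=duration,bit_rate:stream=codec_name,width,height,avg_frame_rate \
  -of json input.mp4 \
  || echo "install FFmpeg and point this at a real video file"
```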
1.3 Browser upload adds its own limits
When uploading from a browser, performance is shaped by browser behavior:
- Chunking and buffering: browsers send data in controlled chunks, not at NIC line rate.
- Limited parallel uploads: browsers restrict how many concurrent connections a site can use.
- Background throttling: inactive tabs or background windows may be deprioritized.
- Memory pressure: very large drag-and-drop batches can stress the browser itself.
This makes browser uploads convenient but rarely optimal for maximum throughput.
1.4 The database is a hidden bottleneck
Each uploaded file generates multiple database operations:
- Asset record: one or more rows describing the media item.
- Job records: entries for background processing queues.
- Index updates: search and Smart Search metadata.
- Transactions and commits: ensure consistency and durability.
Postgres is fast, but not as fast as a fiber link moving raw bytes. On busy systems, the database can become the primary limiter of upload throughput.
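You can observe this pressure directly by querying Postgres's activity view from inside the database container. The container name immich_postgres and the postgres user are assumptions based on a typical compose setup; adjust both to match your deployment.

```shell
# Count backend connections by state; many "active" rows during an upload
# burst means the database, not the network, is doing the work.
SQL='SELECT state, count(*) FROM pg_stat_activity GROUP BY state;'
docker exec immich_postgres psql -U postgres -c "$SQL" \
  || echo "adjust the container name and database user for your deployment"
```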
1.5 Smart Search and background jobs compete for resources
Immich continuously runs background work, especially when the Smart Search queue is non-empty:
- Embeddings: CLIP or similar models generate semantic vectors for search.
- Face recognition: detect and cluster faces.
- Object detection: identify objects and scenes.
- Transcoding and thumbnails: prepare assets for fast playback and preview.
These tasks consume CPU, GPU, and disk I/O. When the queue is large, background work naturally reduces the amount of capacity available for new uploads.
2. Measuring real Immich upload behavior
2.1 Watching containers with docker stats
On the Immich host, you can use docker stats to see how containers behave during heavy uploads:
docker stats

Pay special attention to:
- immich_server: CPU usage and memory; high values indicate active upload and indexing.
- immich_postgres: CPU and block I/O; this reflects database pressure.
- immich_machine_learning: memory and CPU; this shows Smart Search and ML workload.
- immich_redis: typically low usage, but spikes indicate heavy caching activity.
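For a single snapshot that is easier to log or diff than the live view, docker stats supports --no-stream and a Go-template format string. The container names below are common compose defaults; check yours with docker ps.

```shell
# One-shot snapshot of the containers that matter during an upload burst.
docker stats --no-stream \
  --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.BlockIO}}" \
  immich_server immich_postgres immich_machine_learning immich_redis \
  || echo "adjust container names to match the output of 'docker ps'"
```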
2.2 Watching GPU load with nvidia-smi
For GPU-accelerated setups, nvidia-smi reveals how Immich uses the GPU:
watch -n 2 nvidia-smi

Look for:
- GPU Utilization: sustained load during Smart Search and video processing.
- Memory Usage: CLIP and other models typically occupy hundreds of MiB to a few GiB.
- Processes: a python process associated with the ML container when Immich is indexing.
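If you only want the numbers rather than the full table, nvidia-smi's query mode prints CSV that is easy to log; wrap it in watch -n 2 '…' for a live view.

```shell
# One-shot CSV readout of GPU load and memory; errors out on hosts without
# an NVIDIA driver, hence the fallback message.
nvidia-smi --query-gpu=utilization.gpu,memory.used,memory.total \
  --format=csv,noheader \
  || echo "no NVIDIA GPU/driver visible on this host"
```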
3. Practical ways to speed up Immich uploads
3.1 Prefer LAN over WAN whenever possible
Even with fast fiber, uploading over the local network is almost always faster and more stable than uploading over the internet:
- Lower latency: fewer hops, less jitter, and reduced packet overhead.
- No external throttling: no ISP or upstream shaping.
- More predictable I/O: simpler network path to the Immich host.
When planning large imports, perform them from a machine connected directly to your LAN, ideally via wired Ethernet.
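Before tuning Immich itself, it is worth confirming what the network path alone can do. A quick way is iperf3: run a server on the Immich host and a client on the uploading machine. The hostname below is a placeholder.

```shell
# On the Immich host:   iperf3 -s
# On the uploading client, measure raw TCP throughput for 10 seconds:
iperf3 -c immich-server.lan -t 10 \
  || echo "install iperf3 and replace the hostname with your server's address"
```

If iperf3 saturates the link but Immich uploads do not, the bottleneck is processing, not the network.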
3.2 Use the Immich CLI uploader for large batches
For very large imports, the Immich CLI is more robust than the browser:
# Example: uploading a directory via Immich CLI
immich-cli upload /path/to/media \
  --server https://your-immich-domain \
  --api-key YOUR_API_KEY

The CLI avoids browser upload limits, handles larger batches more gracefully, and provides clear terminal feedback on progress and failures.
3.3 Tune Immich worker concurrency
Immich exposes configuration options that control how many concurrent jobs its various workers can process, including:
- Upload concurrency: how many uploads can be processed at once.
- Thumbnail and video workers: how many background tasks run in parallel.
- Machine learning workers: how many Smart Search jobs can use the GPU concurrently.
On strong hardware, increasing concurrency can reduce overall processing time, at the cost of higher peak resource usage. Changes should be introduced gradually while monitoring CPU, RAM, disk I/O, and GPU load.
3.4 Place Postgres on fast storage
Postgres performance has a direct impact on upload throughput. To optimize it:
- Use SSD or NVMe: avoid placing the database on slow spinning disks.
- Separate OS and database where possible: reduce contention with system and container I/O.
- Monitor I/O latency: track whether heavy uploads cause write stalls or high disk wait times.
For very large libraries, a dedicated NVMe volume for Postgres can make a noticeable difference.
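On Linux, iostat from the sysstat package is a simple way to watch that latency. The device name below is a placeholder (list yours with lsblk); add an interval argument, e.g. iostat -dx nvme0n1 2, for a continuously updating view.

```shell
# Extended per-device stats: watch the await (latency) and %util columns
# while an import runs. "nvme0n1" is a placeholder device name.
iostat -dx nvme0n1 \
  || echo "install sysstat and substitute a device name from 'lsblk'"
```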
3.5 Disable or relax video transcoding when not needed
If your devices can already play the original video formats, consider reducing or disabling automatic transcoding. This frees CPU and GPU resources, especially during large imports, and can make uploads feel more responsive.
3.6 Staged imports for smoother experience
Instead of pushing an entire multi-gigabyte library in one go, consider staging the import:
- Phase 1: upload photos first to warm up the database and verify indexing behavior.
- Phase 2: upload videos in smaller batches, monitoring CPU, GPU, and disk I/O.
- Phase 3: fill in any gaps or missing folders once the system is stable under load.
This approach makes it easier to catch misconfiguration early and adjust tuning without risking the entire library.
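A staged run can be scripted rather than assembled by hand. This sketch only prints the planned CLI invocations (reusing the immich-cli command shown earlier; the path and batch size of 500 files are placeholders) so you can review the plan and run batches one at a time.

```shell
# Print the library as upload batches of 500 files each. Review the output,
# then run the lines one batch at a time while monitoring CPU, GPU, and I/O.
MEDIA_DIR=${MEDIA_DIR:-/path/to/media}
find "$MEDIA_DIR" -type f -print0 2>/dev/null \
  | xargs -0 -r -n 500 echo immich-cli upload
```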
4. Advanced pattern: server-side import via rsync
4.1 Concept: separate transfer from indexing
For very large collections, you can decouple file transfer from Immich’s upload pipeline:
- Step 1: transfer files directly to the server with a tool like rsync or scp.
- Step 2: point Immich to that directory or use its import features to index the files locally.
This leverages raw file transfer speed (often much faster than upload+processing) while still letting Immich handle metadata, Smart Search, and thumbnails.
4.2 Example rsync command
# From your client machine to the Immich server
rsync -avhP /path/to/media/ user@immich-server:/path/to/immich-import/

Once the files are on the server, use Immich’s import mechanisms as documented by the project to ingest those directories into the library.
5. Healthy expectations for Immich upload speed
Even with a fast fiber connection, Immich will rarely push uploads at the same speed you would see with a pure file-sharing tool. That is by design. The server is doing more than receiving bytes:
- Protecting data integrity: via hashing and careful writes.
- Structuring your library: via metadata and database indexing.
- Enabling Smart Search: via CPU and GPU-intensive machine learning.
- Preparing previews: via thumbnail and video processing.
The goal is not “max out the link,” but “ingest media reliably, richly, and efficiently without breaking the system”.
With a clean host, GPU acceleration, a tuned database, and a good upload strategy (LAN + CLI + staged imports), Immich can feel fast and responsive while still doing all of the heavy lifting that makes it more than just a file dump.