Remote Machine Learning
To alleviate performance issues on low-memory systems like the Raspberry Pi, you may also host Immich's machine-learning container on a more powerful system (e.g. your laptop or desktop computer):
- Set the URL in Machine Learning Settings on the Admin Settings page to point to the designated ML system, e.g. `http://workstation:3003`.
- Copy the following `docker-compose.yml` to your ML system.
- Start the container by running `docker compose up -d` (a fuller start-and-check sequence is sketched right after this list).
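Once the `docker-compose.yml` shown further below is in place on the ML machine, the sequence might look roughly like the sketch below. The `workstation` hostname is a placeholder, and the `/ping` health route is an assumption based on recent machine-learning releases; skip the curl check if your release does not expose it.

```bash
# On the ML machine, from the folder containing docker-compose.yml:
docker compose up -d
docker compose logs -f immich-machine-learning   # confirm the service starts and listens on port 3003

# From the Immich server (or any host on the network), check that the endpoint is reachable.
# "workstation" is a placeholder for the ML machine's hostname or IP address.
curl http://workstation:3003/ping
```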
info
Starting with version v1.93.0, face detection and facial recognition were split into separate jobs. Face detection is done in the immich_machine_learning container, while facial recognition is done in the microservices worker.
note
The `hwaccel.ml.yml` file also needs to be in the same folder as the `docker-compose.yml` if you want to use hardware acceleration.
```yaml
name: immich_remote_ml
services:
  immich-machine-learning:
    container_name: immich_machine_learning
    # For hardware acceleration, add one of -[armnn, cuda, openvino] to the image tag.
    # Example tag: ${IMMICH_VERSION:-release}-cuda
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}
    # extends:
    #   file: hwaccel.ml.yml
    #   service: # set to one of [armnn, cuda, openvino, openvino-wsl] for accelerated inference - use the `-wsl` version for WSL2 where applicable
    volumes:
      - model-cache:/cache
    restart: always
    ports:
      - 3003:3003
volumes:
  model-cache:
```
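As a concrete sketch of the hardware-acceleration comments above: enabling CUDA, for example, would mean switching the image tag and filling in the `extends` block. This assumes the stock `hwaccel.ml.yml` from the Immich repository, which defines a `cuda` service, sits next to this file:

```yaml
# Only the lines that change relative to the compose file above are shown.
services:
  immich-machine-learning:
    # The -cuda image variant pairs with the cuda service defined in hwaccel.ml.yml.
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}-cuda
    extends:
      file: hwaccel.ml.yml
      service: cuda
```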
Please note that a version mismatch between the two hosts may cause instability and bugs, so make sure to always update them together.
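One simple way to keep the two hosts in step is to pin the same explicit IMMICH_VERSION in an .env file placed next to each docker-compose.yml, rather than relying on the release default. The tag below is only a placeholder:

```
# .env: keep an identical copy next to the docker-compose.yml on both the
# Immich server and the ML machine, and bump both at the same time when updating.
# v1.2.3 is a placeholder tag; use the release you actually run.
IMMICH_VERSION=v1.2.3
```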