Your ML model cache volume is getting wiped during restart, so the model is re-downloaded on the first search after each restart. Either point it at a path somewhere on your storage, or make sure you’re not blowing away the dynamic volume on restart.

In my case I changed this:

  immich-machine-learning:
    ...
    volumes:
      - model-cache:/cache

To this:

  immich-machine-learning:
    ...
    volumes:
      - ./cache:/cache
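The reason this works: a Docker named volume like `model-cache` is deleted by commands such as `docker compose down -v` (or by some container-management UIs when they recreate the stack), taking the downloaded models with it, while a host bind mount survives regardless. A minimal sketch of the full service entry (the image tag and `restart` policy are illustrative, not copied from my actual compose file):

```yaml
services:
  immich-machine-learning:
    # Example tag; pin whichever Immich release you actually run
    image: ghcr.io/immich-app/immich-machine-learning:release
    volumes:
      # Bind mount: models persist on the host even if the
      # container and any named volumes are removed
      - ./cache:/cache
    restart: always
```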

I no longer have to wait uncomfortably long when I’m trying to show off Smart Search to a friend, or just need a meme pronto.

That’ll be all.

    • Avid Amoeba@lemmy.caOP
      3 days ago

      Let me know how inference goes. I might recommend that to a friend with a similar CPU.

      • Showroom7561@lemmy.ca
        2 days ago

        I decided on the ViT-B-16-SigLIP2__webli model, so switched to that last night. I also needed to update my server to the latest version of Immich, so a new smart search job was run late last night.

        Out of 140,000+ photos/videos, it’s down to 104,000 and I have it set to 6 concurrent tasks.

I don’t mind it processing for 24h. I believe when I first set Immich up, the smart search took many days. I’m still able to use the app and website to navigate and search without any delays.

        • Avid Amoeba@lemmy.caOP
          2 days ago

          Let me know how the search performs once it’s done. Speed of search, subjective quality, etc.

          • Showroom7561@lemmy.ca
            7 hours ago

            OK, indexing finished some time yesterday and I ran a few searches like:

            “Child wearing glasses indoors”

            “Cars with no wheels”

            “Woman riding a bike”

Results come up (Immich on Android) in three seconds.

But the quality of the results does appear to be considerably better with ViT-B-16-SigLIP2__webli compared to the default model.

            I’m pretty happy. 👍

            • Avid Amoeba@lemmy.caOP
              7 hours ago

Nice. So this model is perfectly usable on lower-end x86 machines.

              I discovered that the Android app shows results a bit slower than the web. The request doesn’t reach Immich during the majority of the wait. I’m not sure why. When searching from the web app, the request is received by Immich immediately.

              • Showroom7561@lemmy.ca
                5 hours ago

Interesting, it’s slightly slower for me through the web interface, both with a direct connection to my network and when proxied through the internet. Still, we’re talking seconds here, and the results are so accurate!

                Immich has effectively replaced the (expensive) Windows software Excire Foto, which I was using for on-device contextual search because Synology Photos search just sucks. Excire isn’t ideal to run from Linux because it has to be done through a VM, so I’m happy to self-host Immich and be able to use it even while out of the house.

          • Showroom7561@lemmy.ca
            2 days ago

Search speed was never an issue before, and neither was quality. My biggest gripe is not being able to sort search results by date! If I had that, it would be perfect.

            But I’ll update you once it’s done (at 97,000 to go… )