
We have a heavy Core ML model (~170 MB) that we want to include in our iOS app.

Since we don't want the app to be that large, we created a smaller model (with lower performance) that we can bundle directly. Our intention is to download the heavy model on app start and switch over to it once the download completes.

Our initial thought was to use Apple's Core ML Model Deployment solution, but that quickly turned out to be impossible for us, as Apple requires MLModel archives to be at most 50 MB.
So the question is: is there an alternative way to load a Core ML model from a remote source, similar to Apple's solution, and how would one implement it?

Any help would be appreciated. Thanks!

Roi Mulia
Eilon

1 Answer


Put the mlmodel file on a server you own, download it into the app's Documents folder using your favorite method, create a URL to the downloaded file, use MLModel.compileModel(at:) to compile it, and initialize the MLModel (or the automatically generated class) using the compiled model.
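For reference, here's a minimal sketch of those steps in Swift. The server URL and file names are hypothetical placeholders, URLSession stands in for "your favorite method", and error handling is kept to a minimum:

    import CoreML
    import Foundation

    enum RemoteModelLoader {

        // Hypothetical URL of the .mlmodel file on a server you own.
        static let remoteModelURL = URL(string: "https://example.com/models/HeavyModel.mlmodel")!

        // Permanent location for the compiled .mlmodelc bundle.
        static var compiledModelDestination: URL {
            let documents = FileManager.default.urls(for: .documentDirectory,
                                                     in: .userDomainMask)[0]
            return documents.appendingPathComponent("HeavyModel.mlmodelc")
        }

        static func loadRemoteModel(completion: @escaping (MLModel?) -> Void) {
            // Reuse a previously compiled model if we already have one.
            if FileManager.default.fileExists(atPath: compiledModelDestination.path) {
                completion(try? MLModel(contentsOf: compiledModelDestination))
                return
            }

            // 1. Download the .mlmodel file.
            URLSession.shared.downloadTask(with: remoteModelURL) { tempURL, _, error in
                guard let tempURL = tempURL, error == nil else {
                    completion(nil)
                    return
                }
                do {
                    // Give the downloaded file its .mlmodel extension back,
                    // since URLSession hands us a .tmp file.
                    let modelFile = FileManager.default.temporaryDirectory
                        .appendingPathComponent("HeavyModel.mlmodel")
                    try? FileManager.default.removeItem(at: modelFile)
                    try FileManager.default.moveItem(at: tempURL, to: modelFile)

                    // 2. Compile the model. compileModel(at:) writes the
                    //    compiled .mlmodelc into a temporary directory, so
                    //    move it somewhere permanent.
                    let compiledURL = try MLModel.compileModel(at: modelFile)
                    try? FileManager.default.removeItem(at: compiledModelDestination)
                    try FileManager.default.moveItem(at: compiledURL,
                                                     to: compiledModelDestination)

                    // 3. Initialize the MLModel from the compiled bundle.
                    completion(try MLModel(contentsOf: compiledModelDestination))
                } catch {
                    print("Failed to compile or load the model: \(error)")
                    completion(nil)
                }
            }.resume()
        }
    }

Once loadRemoteModel delivers a non-nil MLModel, you can swap it in for the bundled lightweight model.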

Matthijs Hollemans
  • Hey Matthijs! On the same note, using this preloading mechanism, I'm getting a weird memory spike when loading an MLModel without even calling predict: simply initializing it uses 2-3 GB of RAM, even though the model file weighs less than 1 MB. Could it be connected to the layers/classes the model stores? It's crashing my iPhone 12 Pro due to a memory issue. I posted a new Stack Overflow question here: https://stackoverflow.com/questions/67968988/mlmodel-crash-app-on-init-due-to-memory-issue Thank you for all the help, couldn't be here without it! – Roi Mulia Jun 14 '21 at 14:54