I'm using Core ML models for image style transfer. An initialized model takes ~60 MB of memory on an iPhone X running iOS 12. However, the same model loaded on an iPhone Xs (Max) consumes more than 700 MB of RAM.
In Instruments I can see that the runtime allocates 38 IOSurfaces with a memory footprint of up to 54 MB each, alongside numerous other Core ML (Espresso) related objects. These allocations are not present on the iPhone X.
My guess is that the Core ML runtime does something different in order to utilize the power of the A12. However, my app crashes due to the memory pressure.
I already tried converting my models again with the newest version of coremltools; however, the results are identical.
Did I miss something?
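One thing I haven't tried yet is restricting the compute units so the runtime can't take the A12-specific path. Roughly like this, using `MLModelConfiguration` from Core ML 2 (iOS 12) — `StyleTransfer` is a placeholder for my actual compiled model name:

```swift
import CoreML

// Restrict Core ML to CPU/GPU. On A12 devices this should bypass
// whatever the runtime does differently for the Neural Engine.
let config = MLModelConfiguration()
config.computeUnits = .cpuAndGPU  // or .cpuOnly to rule out the GPU as well

// "StyleTransfer" is a placeholder; use your own compiled model name.
guard let url = Bundle.main.url(forResource: "StyleTransfer",
                                withExtension: "mlmodelc") else {
    fatalError("Compiled model not found in bundle")
}

do {
    let model = try MLModel(contentsOf: url, configuration: config)
    // ... run predictions as before
} catch {
    print("Model load failed: \(error)")
}
```

Would this be a reasonable workaround, or is there a way to keep the Neural Engine path without the memory blow-up?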