SageMaker local mode: endpoint fails to launch when model.tar.gz exceeds ~200 MB
I am experimenting with an inference endpoint in local mode using a Docker container. As soon as my model.tar.gz file exceeds a certain size, around 200 MB, the deployment fails with the error:
RuntimeError: Giving up, endpoint: didn't launch correctly
When I deploy it on a SageMaker instance, it works fine. Is there something I can do, perhaps a Docker setting I could change, to make the local deployment work with the larger model.tar.gz as well?
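For context, the deployment is done roughly like this (a sketch using the SageMaker Python SDK's generic `Model` class; the image name, model path, and role ARN below are placeholders, not my actual values):

```python
from sagemaker.model import Model
from sagemaker.local import LocalSession

# Force the SDK into local mode so the endpoint runs in a local Docker container.
session = LocalSession()
session.config = {"local": {"local_code": True}}

model = Model(
    image_uri="my-inference-image:latest",        # placeholder: custom inference image
    model_data="file://./model.tar.gz",           # the large artifact that triggers the failure
    role="arn:aws:iam::123456789012:role/dummy",  # placeholder role; local mode does not assume it
    sagemaker_session=session,
)

# instance_type="local" is what selects local mode; on a real
# instance type (e.g. ml.m5.large) the same call works fine.
predictor = model.deploy(initial_instance_count=1, instance_type="local")
```

The same failure occurs regardless of which framework container I use, as long as the artifact is above the size threshold.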