I am running a Flask app with a SystemML SVM component through PySpark. The app runs fine for about a day, then starts erroring out whenever the SVM is used to make a prediction. The error thrown is:
Caused by: java.io.FileNotFoundException: /tmp/systemml/_p1_10.101.38.73/cache/cache000005058.dat (No such file or directory)
I believe what is happening is that SystemML is writing its cache files to /tmp/, which is eventually cleared out by the container I am running in. When a prediction is then requested, SystemML attempts to read the deleted cache file and errors out. Am I correct in that guess? What's the best way to solve this? Is there a way to tell SystemML where to write its cache?
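While digging around, I did find a local temp directory setting in SystemML's configuration file that looks relevant, but I haven't confirmed it controls this cache path (the exact property name may also differ between versions). Something like:

```xml
<!-- SystemML-config.xml: redirect SystemML's local scratch/cache files
     away from /tmp so the container cleanup doesn't delete them.
     Property name is my guess from the docs; not yet verified. -->
<root>
   <sysml.localtmpdir>/var/lib/systemml/tmp</sysml.localtmpdir>
</root>
```

Is this the right knob, and if so, how do I get my PySpark session to pick up this config file?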
Thanks for any help you can give!