I want to run inference with a large model (e.g. BERT) on Spark, since I don't have enough GPUs. I have two problems.
- I exported the model in SavedModel (.pb) format and load it with the SavedModelBundle interface:
  ```java
  SavedModelBundle bundle = SavedModelBundle.load("E:\\pb\\1561992264", "serve");
  ```
  However, I can't find a way to load the SavedModel from an HDFS path (the only workaround I can think of is sketched after this list).
- The Spark cluster's glibc version isn't compatible with the TensorFlow version I used to train the model. Is there any way to work around this?
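
For the HDFS problem, the only workaround I've come up with is to copy the SavedModel directory from HDFS to a local temp directory on the executor and then point SavedModelBundle.load at the local copy. A rough, untested sketch (the class and method names are just for illustration; it assumes the hadoop-client jars that Spark already ships are on the classpath):

```java
import java.nio.file.Files;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.tensorflow.SavedModelBundle;

public class HdfsModelLoader {
    // Copies the exported SavedModel directory out of HDFS and loads it,
    // since SavedModelBundle.load only understands local filesystem paths.
    public static SavedModelBundle loadFromHdfs(String hdfsModelDir) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(new java.net.URI(hdfsModelDir), conf);

        // Stage the model under a temp directory on the executor's local disk.
        java.nio.file.Path localDir = Files.createTempDirectory("saved_model");
        fs.copyToLocalFile(new Path(hdfsModelDir), new Path(localDir.toString()));

        // copyToLocalFile places the model in <localDir>/<last path component>.
        String localModelPath = localDir.resolve(new Path(hdfsModelDir).getName()).toString();
        return SavedModelBundle.load(localModelPath, "serve");
    }
}
```

The idea would be to call this once per executor (e.g. inside mapPartitions) and reuse the bundle, but I don't know whether that's reasonable for a model of BERT's size.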
I'm not sure this is a good way to serve a TensorFlow model on Spark. Any other suggestions are appreciated!