I'm experiencing issues with a multithreaded application that uses Keras with the TensorFlow backend.
I have 2 threads:
- Thread A: Creates the model once and compiles it. Every 30 minutes it fetches a fresh version of the training data and trains the model again, then saves it to an .h5 file.
- Thread B: Using a thread lock (to keep access to the file thread-safe), this thread preprocesses its data, calls load_model on the aforementioned .h5 file, and then calls predict on the loaded model. (Both threads are sketched below.)
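For reference, here is a stripped-down sketch of what the two threads do. build_model, fetch_train_data and preprocess_data are placeholders for my actual code, and the lock/path names are simplified:

```python
import threading
import time

from keras.models import load_model

MODEL_PATH = 'model.h5'
model_lock = threading.Lock()

def thread_a():
    model = build_model()  # placeholder: builds and compiles the Keras model once
    while True:
        x_train, y_train = fetch_train_data()  # placeholder: fetches the latest training data
        model.fit(x_train, y_train)
        with model_lock:
            model.save(MODEL_PATH)
        time.sleep(30 * 60)  # retrain every 30 minutes

def thread_b():
    while True:
        x = preprocess_data()  # placeholder: preprocesses the data to score
        with model_lock:
            model = load_model(MODEL_PATH)  # the exception is raised here
        predictions = model.predict(x)

threading.Thread(target=thread_a, daemon=True).start()
threading.Thread(target=thread_b, daemon=True).start()
```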
I'm puzzled, because whenever load_model() is called I keep getting an exception like the one below:

Cannot interpret feed_dict key as Tensor: Tensor Tensor("input_1:0", shape=(?, 60, 17), dtype=float32) is not an element of this graph
I have found this issue: https://github.com/keras-team/keras/issues/6124 , and since Thread A never makes any predictions itself, I now call _make_predict_function() right after fitting the model in Thread A and before saving it, but I still get the exception.
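So the order of calls in Thread A now looks roughly like this (x_train and y_train stand in for the freshly fetched training data):

```python
model.fit(x_train, y_train)
model._make_predict_function()  # workaround suggested in keras-team/keras#6124
model.save('model.h5')          # Thread B still fails on load_model() afterwards
```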
Do you have any suggestions/tips?