I use two different models to predict two different values. The predictions are computed on a CPU. My time budget is constrained such that, run sequentially, one of the two predictions arrives too late. Is there a way to run the two predictions in parallel on a CPU?
I am using Keras 2.3.1, TensorFlow 2.0.0, and Python 3.6.9.
My attempt so far (not functional code):
from tensorflow.keras.models import load_model
import concurrent.futures

# Both models are loaded in the parent process.
model_one = load_model(path_to_model_one)
model_two = load_model(path_to_model_two)

with concurrent.futures.ProcessPoolExecutor() as executor:
    # Submit each model's predict call to its own worker process.
    f1 = executor.submit(model_one.predict, [input_values_one])
    f2 = executor.submit(model_two.predict, [input_values_two])
    prediction_one = f1.result()
    prediction_two = f2.result()
I get this error:
Traceback (most recent call last):
File "/usr/lib/python3.6/multiprocessing/queues.py", line 234, in _feed
obj = _ForkingPickler.dumps(obj)
File "/usr/lib/python3.6/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <class 'tensorflow.python.keras.saving.saved_model.load.Model'>: attribute lookup Model on tensorflow.python.keras.saving.saved_model.load failed
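One workaround I am considering (a minimal, untested sketch, assuming path_to_model_one/two and input_values_one/two are defined as above): instead of submitting the bound model.predict method, have each worker process load its model by path and run the prediction there, so only picklable arguments (a path string and the input array) cross the process boundary.

from concurrent.futures import ProcessPoolExecutor
from tensorflow.keras.models import load_model

def load_and_predict(model_path, input_values):
    # Load the model inside the worker process; the model object itself
    # is never pickled, only the path and the input data are.
    model = load_model(model_path)
    return model.predict([input_values])

with ProcessPoolExecutor(max_workers=2) as executor:
    f1 = executor.submit(load_and_predict, path_to_model_one, input_values_one)
    f2 = executor.submit(load_and_predict, path_to_model_two, input_values_two)
    prediction_one = f1.result()
    prediction_two = f2.result()

The obvious downside is that load_model runs on every call, which adds a lot of overhead if the predictions have to be repeated; a long-lived worker that loads its model once would avoid that. Is there a better-supported way to do this?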