I have a trained Keras model, and I am trying to run predictions with the CPU only. I want this to be as quick as possible, so I thought I would use predict_generator
with multiple workers. All of the data for my prediction tensors is loaded into memory beforehand. For reference, array is a list of tensors, with the first tensor having shape [nsamples, x, y, nchannels]. I made a thread-safe generator following the instructions here (I followed this when using fit_generator
as well).
import numpy as np
import keras

class DataGeneratorPredict(keras.utils.Sequence):
    'Generates data for Keras'
    def __init__(self, array, batch_size=128):
        'Initialization'
        self.array = array
        self.nsamples = array[0].shape[0]
        self.batch_size = batch_size
        self.ninputs = len(array)
        self.indexes = np.arange(self.nsamples)

    def __len__(self):
        'Denotes the number of batches'
        print('nbatches:', int(np.floor(self.nsamples / self.batch_size)))
        return int(np.floor(self.nsamples / self.batch_size))

    def __getitem__(self, index):
        'Generate one batch of data'
        # Slice out the sample indices for this batch
        print(index)
        inds = self.indexes[index*self.batch_size:(index+1)*self.batch_size]
        # Gather this batch from each input tensor
        X = []
        for inp in range(self.ninputs):
            X.append(self.array[inp][inds])
        return X
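As a side note on the generator above, here is a standalone NumPy sketch of the batch slicing that __getitem__ does (using a toy array in place of the real input tensors). Because __len__ floors nsamples / batch_size, any remainder samples past the last full batch are never yielded:

```python
import numpy as np

# Toy stand-in for one input tensor: 300 samples, 2 features each.
nsamples, batch_size = 300, 128
array = [np.arange(nsamples * 2).reshape(nsamples, 2)]
indexes = np.arange(nsamples)

nbatches = int(np.floor(nsamples / batch_size))  # same floor as __len__
batches = []
for index in range(nbatches):
    # Same slicing as __getitem__
    inds = indexes[index * batch_size:(index + 1) * batch_size]
    batches.append([a[inds] for a in array])

# 300 // 128 == 2, so only 256 of the 300 samples are ever yielded:
# the floor in __len__ silently drops the final partial batch.
covered = sum(b[0].shape[0] for b in batches)
print(nbatches, covered)  # 2 256
```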
I run predictions with my model like so:

# all_test_in is my list of input data tensors
gen = DataGeneratorPredict(all_test_in, batch_size=1024)
new_preds = conv_model.predict_generator(gen, workers=4, use_multiprocessing=True)
but I don't get any speed improvement over using conv_model.predict
, regardless of the number of workers. This approach worked well when fitting my model (i.e., I got a speed-up using a generator with multiple workers). Am I missing something in my generator? Is there a more efficient way to optimize predictions (besides using a GPU)?