
Simply put: imagine I have 100 trained neural networks and a test image (or a batch of samples, a tensor, a vector, etc.). I want to compute the output of all my trained nets on the test sample. One simple way is to use a for loop and calculate the network responses one by one, like so:

import numpy as np

def show_reconstructions(model, X_test):
    reconstructions = model.predict(X_test)
    return reconstructions

NormDiff = []
for i in range(100):
    Rec = show_reconstructions(Nets[i], test_samples)
    Diff = Rec - test_samples
    NormDiff.append(np.linalg.norm(Diff))

I keep all my net objects in a list named Nets, and in the end I need the NormDiff variable: a vector of size 100 holding the norm of the difference between the test samples and each net's output (an error measure, in effect). My question is simply: how can I remove the for loop and compute all of the network outputs on the test input at once? The goal is to save time (a real-time application is assumed). The output of one network is obviously completely independent of the other nets' outputs, so this task seems parallelizable. But I am a new Python coder and don't know how to do that. I tried the numba library, but it does not accept network objects as input arguments in a list or dict.
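For what it's worth, once all the reconstructions are stacked into a single array, the norm computation itself needs no Python loop. A minimal sketch using NumPy only; the random arrays and the shapes here are placeholders standing in for the real nets' outputs, not actual model predictions:

```python
import numpy as np

# Hypothetical stand-ins: random arrays play the role of the 100 nets'
# reconstructions of one batch of test samples.
n_nets, n_samples, n_features = 100, 32, 74
reconstructions = np.random.rand(n_nets, n_samples, n_features)
test_samples = np.random.rand(n_samples, n_features)

# Broadcasting subtracts the same test batch from every net's output;
# axis=(1, 2) then yields one Frobenius norm per network.
diff = reconstructions - test_samples          # shape (100, 32, 74)
norm_diff = np.linalg.norm(diff, axis=(1, 2))  # shape (100,)
```

This vectorizes only the norm step; producing the stacked reconstructions in one shot is the harder part the question asks about.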


1 Answer


I have two suggestions:

  1. Parallelize the for loop to speed up the process (see How do I parallelize a simple Python loop?).
  2. Create a giant model composed of the 100 models, so you only predict once.
import tensorflow as tf
from tensorflow import keras

def f(models):
    inputs = keras.Input(shape=(150, 150, 3))
    x = models[0](inputs, training=False)
    for model in models[1:]:
        res = model(inputs, training=False)
        x = tf.concat([x, res], axis=1)
    return keras.Model(inputs=inputs, outputs=x)

model = f(models)
result = model.predict(input_)
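As a rough sketch of suggestion 1, the loop body from the question can be farmed out to a thread pool using `concurrent.futures` from the standard library. The predict functions below are hypothetical stand-ins for `Nets[i].predict`; whether threads actually speed things up depends on the backend releasing the GIL during prediction:

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def norm_diff_for(predict, test_samples):
    # One network's error norm -- the body of the original for loop.
    return np.linalg.norm(predict(test_samples) - test_samples)

# Hypothetical stand-ins for the 100 trained nets: each "predict" just
# scales its input; in practice, pass Nets[i].predict instead.
predicts = [lambda x, s=s: x * s for s in np.linspace(0.9, 1.1, 100)]
test_samples = np.random.rand(32, 74)

# pool.map preserves order, so NormDiff[i] still corresponds to Nets[i].
with ThreadPoolExecutor(max_workers=8) as pool:
    NormDiff = list(pool.map(lambda p: norm_diff_for(p, test_samples),
                             predicts))
```

A process pool (`ProcessPoolExecutor`) avoids the GIL entirely, but requires the work items to be picklable, which trained model objects often are not.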
  • Hi Ghassen, thanks for your attention to my issue. In the second solution, the gigantic model seems to contain an intrinsic for loop, so it may take about as long as my first solution; you just use the concat function instead of append. – Mohammad Oct 12 '21 at 12:29
  • Yes, you are right; I edited the response. It uses concat, but in the last step we return a model, and when you predict, all the models' responses come out concatenated. The difference between the two approaches is that the first predicts on the spot, while the second builds the model first and runs the prediction at the end. – Ghassen Sultana Oct 12 '21 at 13:09
  • Yes. Since the algorithm allows it, it is possible to build the huge model outside and just use it in the algorithm, so it may work. But I am struggling with building the huge model. Do you have another example of making one huge model from multiple models? *thanks* – Mohammad Oct 13 '21 at 21:17
  • I get a warning and an error. The warning: *WARNING:tensorflow:Model was constructed with shape (None, 74) for input KerasTensor(type_spec=TensorSpec(shape=(None, 74), dtype=tf.float32, name='sequential_input'), name='sequential_input', description="created by layer 'sequential_input'"), but it was called on an input with incompatible shape (None, 1, 74, 1).* – Mohammad Oct 13 '21 at 21:53
  • The error: *concat() missing 1 required positional argument: 'axis'* – Mohammad Oct 13 '21 at 21:54
  • I solved the warning by adjusting the shape correctly, but I am still struggling with the error, unfortunately. :/ – Mohammad Oct 13 '21 at 21:58
  • You can use either tf.concat([x, res], axis=0) or tf.concat([x, res], axis=1), but you must make sure the result is reshaped accordingly after the predict. – Ghassen Sultana Oct 13 '21 at 22:09
  • Thanks, I solved the error too, but surprisingly the execution time is greater than in the case where the for loop was used. What about GPU computing? Do you know about it? **Regards** – Mohammad Oct 13 '21 at 22:16
  • TensorFlow will execute the predict directly on the GPU if CUDA is correctly installed. You can check https://www.tensorflow.org/install/gpu?hl=fr#software_requirements for the requirements. – Ghassen Sultana Oct 13 '21 at 22:39