1 - Easy, probably the best - but maybe not good for memory
Why not just precompute the expected outputs for the loss up front?
# transform the targets once with the non-trainable ops model
new_y_train = non_trainable_ops_model.predict(original_y_train)
# then train the trainable model against the transformed targets
nn_model.fit(x_train, new_y_train)
This is definitely the best way if your memory can handle it: a simpler model and faster training.
You can even save/load the transformed data:
np.save(name, new_y_train)  # note: np.save appends '.npy' if name lacks the extension
new_y_train = np.load(name)  # make sure name includes '.npy' here
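For concreteness, here's a minimal end-to-end sketch of this option. The shapes, layer sizes, and the halving operation inside the non-trainable model are invented for illustration:

import numpy as np
from tensorflow.keras.layers import Input, Dense, Lambda
from tensorflow.keras.models import Model

# hypothetical non-trainable ops model (a stand-in halving op)
nto_input = Input((10,))
nto_output = Lambda(lambda t: t * 0.5)(nto_input)
non_trainable_ops_model = Model(nto_input, nto_output)

# hypothetical trainable model
nn_input = Input((4,))
nn_output = Dense(10)(Dense(16, activation='relu')(nn_input))
nn_model = Model(nn_input, nn_output)
nn_model.compile(optimizer='adam', loss='mse')

# transform the targets once, then train normally
x_train = np.random.rand(100, 4)
original_y_train = np.random.rand(100, 10)
new_y_train = non_trainable_ops_model.predict(original_y_train)
nn_model.fit(x_train, new_y_train, epochs=2)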
2 - Make the model output the loss and use a dummy loss for compiling
Losses:
def dummy_loss(true, pred):
    return pred  # the model's output is already the loss value

def true_loss(x):
    true, pred = x
    return loss_function(true, pred)  # you can probably import a ready-made loss_function from keras.losses
Model:
# given:
nn_model = create_nn_model()
non_trainable_ops_model = create_nto_model()

nn_input = Input(nn_input_shape)
nto_input = Input(nto_input_shape)

nn_outputs = nn_model(nn_input)
nto_outputs = non_trainable_ops_model(nto_input)

# compute the true loss inside the graph
loss = Lambda(true_loss)([nto_outputs, nn_outputs])

training_model = Model([nn_input, nto_input], loss)
training_model.compile(loss=dummy_loss, ...)  # plus optimizer, etc.
training_model.fit([nn_x_train, nto_x_train], np.zeros((len(nn_x_train),)))
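Putting the pieces together, a runnable sketch of this option could look like the one below. The builders create_nn_model and create_nto_model, the input shapes, and the choice of mean squared error as loss_function are all assumptions for illustration:

import numpy as np
from tensorflow.keras.layers import Input, Dense, Lambda
from tensorflow.keras.models import Model
from tensorflow.keras.losses import mean_squared_error as loss_function  # assumed loss

def dummy_loss(true, pred):
    return pred  # the model already outputs the loss

def true_loss(x):
    true, pred = x
    return loss_function(true, pred)

def create_nn_model():  # hypothetical trainable model
    inp = Input((4,))
    out = Dense(10)(Dense(16, activation='relu')(inp))
    return Model(inp, out)

def create_nto_model():  # hypothetical fixed ops (a stand-in halving op)
    inp = Input((10,))
    out = Lambda(lambda t: t * 0.5)(inp)
    model = Model(inp, out)
    model.trainable = False
    return model

nn_model = create_nn_model()
non_trainable_ops_model = create_nto_model()

nn_input = Input((4,))
nto_input = Input((10,))
nn_outputs = nn_model(nn_input)
nto_outputs = non_trainable_ops_model(nto_input)

loss = Lambda(true_loss)([nto_outputs, nn_outputs])

training_model = Model([nn_input, nto_input], loss)
training_model.compile(optimizer='adam', loss=dummy_loss)

nn_x_train = np.random.rand(100, 4)
nto_x_train = np.random.rand(100, 10)
training_model.fit([nn_x_train, nto_x_train], np.zeros((100,)), epochs=2)

The dummy targets are never used; they only satisfy fit's signature while dummy_loss passes the model's output (the real loss) straight to the optimizer.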
3 - Use model.add_loss instead of compiling a loss
Following the same setup as the previous option, you can:
training_model = Model([nn_input, nto_input], nn_outputs)

loss = true_loss([nto_outputs, nn_outputs])
training_model.add_loss(loss)

training_model.compile(loss=None, ...)  # no compiled loss is needed
training_model.fit([nn_x_train, nto_x_train], None)
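Under the same illustrative assumptions as above (invented shapes and mean squared error standing in for loss_function), a self-contained sketch of the add_loss variant:

import numpy as np
import tensorflow.keras.backend as K
from tensorflow.keras.layers import Input, Dense, Lambda
from tensorflow.keras.models import Model
from tensorflow.keras.losses import mean_squared_error as loss_function  # assumed loss

nn_input = Input((4,))
nn_outputs = Dense(10)(Dense(16, activation='relu')(nn_input))

nto_input = Input((10,))
nto_outputs = Lambda(lambda t: t * 0.5)(nto_input)  # stand-in for the fixed ops

# the model outputs plain predictions; the loss is attached separately
training_model = Model([nn_input, nto_input], nn_outputs)

# wrap the loss in a Lambda so it stays inside the Keras graph, reduced to a scalar
loss = Lambda(lambda x: K.mean(loss_function(x[0], x[1])))([nto_outputs, nn_outputs])
training_model.add_loss(loss)

training_model.compile(optimizer='adam', loss=None)  # the added loss drives training
nn_x_train = np.random.rand(100, 4)
nto_x_train = np.random.rand(100, 10)
training_model.fit([nn_x_train, nto_x_train], None, epochs=2)

Note that the model's output is now the predictions themselves rather than the loss value.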
4 - Enable eager execution and make custom training loops
https://www.tensorflow.org/tutorials/customization/custom_training_walkthrough
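Eager execution is already the default in TF 2.x, and the linked walkthrough covers the details. As a rough sketch, the idea is to run the non-trainable ops on the targets outside the tape (no gradient is needed through them) and differentiate only through the trainable model; the model, shapes, and loss below are illustrative assumptions:

import numpy as np
import tensorflow as tf

# hypothetical trainable model and fixed target transform
nn_model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(10),
])
optimizer = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.MeanSquaredError()

def non_trainable_ops(y):
    return y * 0.5  # stand-in for your fixed operations

x_train = np.random.rand(100, 4).astype('float32')
y_train = np.random.rand(100, 10).astype('float32')
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(32)

for epoch in range(2):
    for x_batch, y_batch in dataset:
        target = non_trainable_ops(y_batch)  # computed outside the tape
        with tf.GradientTape() as tape:
            pred = nn_model(x_batch, training=True)
            loss = loss_fn(target, pred)
        grads = tape.gradient(loss, nn_model.trainable_variables)
        optimizer.apply_gradients(zip(grads, nn_model.trainable_variables))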