Stealing the example code from here:
import numpy as np
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

def custom_loss_wrapper(input_tensor):
    # Close over the model's input tensor so the loss can use it.
    def custom_loss(y_true, y_pred):
        return K.binary_crossentropy(y_true, y_pred) + K.mean(input_tensor)
    return custom_loss

input_tensor = Input(shape=(10,))
hidden = Dense(100, activation='relu')(input_tensor)
out = Dense(1, activation='sigmoid')(hidden)
model = Model(input_tensor, out)
model.compile(loss=custom_loss_wrapper(input_tensor), optimizer='adam')

X = np.random.rand(1000, 10)
y = np.random.rand(1000, 1)
model.train_on_batch(X, y)
This no longer works in recent TensorFlow versions. The main solution I have seen is to disable eager execution, but that would break other things I'm doing.
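For reference, that workaround amounts to the following, which I'd rather avoid since it switches the whole program back to graph mode:

import tensorflow as tf

# Falls back to TF1-style graph mode so the wrapper-based loss works again,
# but it disables eager execution globally, which breaks the rest of my code.
tf.compat.v1.disable_eager_execution()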
How can this type of functionality be maintained while staying in eager mode, i.e. how can the model's input still be passed to the loss function? The best I can think of is to modify the network to concatenate the input to the output and then pull it apart again inside the loss, roughly as sketched below. Very kludgy though.
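Something like this is what I mean by the concatenation kludge (untested sketch; concat_loss and out_plus_input are just names I made up):

from tensorflow.keras.layers import Concatenate

# Stick the input onto the end of the output so the loss function can see it.
out_plus_input = Concatenate()([out, input_tensor])
model = Model(input_tensor, out_plus_input)

def concat_loss(y_true, y_pred):
    # y_pred now carries [prediction, input] side by side; split it back apart.
    pred = y_pred[:, :1]
    passed_input = y_pred[:, 1:]
    return K.binary_crossentropy(y_true[:, :1], pred) + K.mean(passed_input)

model.compile(loss=concat_loss, optimizer='adam')
model.train_on_batch(X, y)  # y may also need padding to match the wider output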