I have a two-branch network where one branch outputs a regression value and the other branch outputs a classification label.
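The branches are built roughly like this (the layer sizes, activations, and names below are only placeholders, not my real architecture):

import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.models import Model

inputs = tf.keras.Input(shape=(64,))                  # placeholder input shape
x = layers.Dense(32, activation='relu')(inputs)       # shared trunk

output1 = layers.Dense(1, name='reg_out')(x)                          # regression branch
output2 = layers.Dense(3, activation='softmax', name='class_out')(x)  # classification branch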
model = Model(inputs=inputs, outputs=[output1, output2])
model.compile(loss=[my_loss_reg, my_loss_class], optimizer='adam')
I want to implement a custom loss function, my_loss_reg(), for the regression branch that adds a fraction of the classification loss at the regression end, along these lines:
from tensorflow.keras import backend as K

def my_loss_reg(y_true, y_pred):
    loss_mse = K.mean(K.sum(K.square(y_true - y_pred)))
    # loss_class = calculate_classification_loss()  # How to implement this?
    final_loss = some_function(loss_mse, loss_class)  # Can calculate only if loss_class is available
    return final_loss
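I train the model with a separate target array for each branch, something like this (the variable names are just placeholders):

model.fit(x_train, [y_train_reg, y_train_class], epochs=10)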
The y_true and y_pred are the true and predicted regression values at the regression branch. To calculate the classification loss I need the true and predicted classification labels, which are not available in my_loss_reg().
My question is: how can I calculate or access the classification loss at the regression end of the network? Similarly, I want to access the regression loss at the classification end while calculating the custom loss function my_loss_class() for the classification branch.
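For completeness, the classification loss currently looks roughly like this (categorical cross-entropy is only an example; the exact classification loss is not the point):

def my_loss_class(y_true, y_pred):
    # plain categorical cross-entropy on the classification branch
    loss_ce = K.mean(K.categorical_crossentropy(y_true, y_pred))
    # here I would also need the regression loss, e.g.
    # final_loss = some_function(loss_ce, loss_reg)  # loss_reg is not accessible here
    return loss_ce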
How can I do that? Any code snippets would be helpful. I found this solution, but it is no longer valid with the latest versions of TensorFlow and Keras.