Use Case:
I have multiple models, each with a slightly different architecture: Model A with one output, and Model B with two outputs (call them output_1 and output_2). output_1 of Model B corresponds to the same information as the single output of Model A. I would like to plot the accuracies/losses from output_1 of Model B together with those of Model A on the same graph in TensorBoard. I'm using the functional API for Models A/B, and so have named the two outputs of Model B output_1 and output_2, while naming the single output of Model A output_1.
I've noticed Keras/TensorBoard will report the accuracies/losses for Model B as output_1_acc, output_2_acc, output_1_loss, output_2_loss, val_output_1_acc, val_output_2_acc, val_output_1_loss, val_output_2_loss (plus possibly an overall loss metric corresponding to the sum of the losses from outputs 1/2). However, Keras/TensorBoard will report the accuracies/losses for Model A (if compiled with metric "accuracy") as acc, loss, val_acc, val_loss. So the accuracies for output_1 of Models A/B aren't plotted on the same graph in TensorBoard, leaving me to download CSVs for each and then plot them together manually.
So I'm looking for a clean way to simply rename the metrics (accuracies/losses) for the single output of Model A (output_1) so that TensorBoard will report them on the same graphs as the accuracies/losses for output_1 of Model B.
Googling around has given me a method, but it's a bit cumbersome. Essentially, I have to define custom metrics for Model A whose function names equal the accuracy/loss names reported for Model B (listed explicitly in the second paragraph above), and then compile the model with these custom metrics. A brief example of what I'm doing:
from keras import backend as K

class Model_A:
    def edge_output_acc(self, y_true, y_pred):
        # effectively hardcodes binary accuracy as the form of accuracy
        return K.mean(K.equal(y_true, K.round(y_pred)))

    def edge_output_loss(self, y_true, y_pred):
        # effectively hardcodes binary crossentropy as the form of loss
        return K.binary_crossentropy(y_true, y_pred)

    # now compile with something like this
    def new_compile(self, loss, sgd):
        # the loss param could get out of sync with the hardcoded metrics above
        # unless I recreate a lot of Keras' internal switching
        self.model.compile(loss=loss, optimizer=sgd,
                           metrics=[self.edge_output_acc, self.edge_output_loss])

    # ...instead of compiling with something like this
    def old_compile(self, loss, sgd):
        self.model.compile(loss=loss, optimizer=sgd, metrics=["accuracy"])
This is fine, except that it forces me to hardcode the loss/accuracy type used for Model A (so that I can define these custom metrics). I would like to provide these as parameters to my model-definition script so that modeling is more customizable. But as I see it, this essentially means I have to reproduce Keras' internal switching behavior from training.py/metrics.py (linked below), which is obviously a headache and redundant.
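One direction I've been considering (an untested sketch, not something confirmed by the Keras docs): since Keras appears to build its log keys from each metric function's __name__, it might be enough to wrap whatever built-in metric the caller selects and rename the wrapper, instead of reimplementing the metric body by hand:

def rename_metric(metric_fn, name):
    # Wrap a metric function so Keras logs it under `name`.
    # Assumes Keras derives log keys (e.g. output_1_acc) from the
    # metric function's __name__, so renaming the wrapper would line
    # Model A's curves up with Model B's without hardcoding the metric.
    def wrapped(y_true, y_pred):
        return metric_fn(y_true, y_pred)
    wrapped.__name__ = name
    return wrapped

# hypothetical usage, passing a renamed built-in metric to compile:
# model.compile(loss=loss, optimizer=sgd,
#               metrics=[rename_metric(keras.metrics.binary_accuracy,
#                                      "output_1_acc")])

This would let the metric type stay a parameter (any callable Keras accepts) while only the reported name is fixed, but I haven't verified that the renamed function survives all of Keras' internal metric handling.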
Is there a cleaner way?
Thanks!
For reference, this link describes the approach I'm currently using, as outlined above, in more detail:
How to use ModelCheckpoint with custom metrics in Keras?