2

Use case: I have multiple models, each with a slightly different architecture: Model A with one output, and Model B with two outputs (call them output_1 and output_2). output_1 of Model B corresponds to the same information as the single output of Model A. I would like to plot the accuracies/losses from output_1 of Model B together with those of Model A on the same graph in TensorBoard. I'm using the functional API for Models A/B, and so I've named the two outputs of Model B output_1 and output_2, while naming the single output of Model A output_1.

I've noticed Keras/TensorBoard reports the accuracies/losses for Model B as output_1_acc, output_2_acc, output_1_loss, output_2_loss, val_output_1_acc, val_output_2_acc, val_output_1_loss, val_output_2_loss (plus possibly a combined loss metric corresponding to the sum of the losses from outputs 1/2). However, for Model A (if compiled with metrics=["accuracy"]) it reports them simply as acc, loss, val_acc, val_loss. So the accuracies for output_1 of Models A/B aren't plotted on the same graph in TensorBoard, leaving me to download CSVs for each and plot them together manually.

So I'm looking for a clean way to rename the metrics (accuracies/losses) for the single output of Model A (output_1) so that TensorBoard reports them on the same graphs as the accuracies/losses for output_1 of Model B.

Googling around has given me a method, but it's a bit cumbersome. Essentially, I have to define custom metrics for Model A whose function names equal the accuracy/loss names reported for Model B (the names listed in the second paragraph above), and then compile the model with these custom metrics. A brief example of what I'm doing:

from keras import backend as K

class Model_A:

    def edge_output_acc(self, y_true=None, y_pred=None):
        # effectively hardcodes binary accuracy as the form of accuracy
        return K.mean(K.equal(y_true, K.round(y_pred)))

    def edge_output_loss(self, y_true=None, y_pred=None):
        # effectively hardcodes binary crossentropy as the form of loss
        return K.binary_crossentropy(y_true, y_pred)

    # now compile with something like this...
    def new_compile(self, loss, sgd):
        # the loss param could get out of sync with the hardcoded loss above,
        # unless I recreate a lot of Keras' internal switching
        self.model.compile(loss=loss, optimizer=sgd,
                           metrics=[self.edge_output_acc, self.edge_output_loss])

    # ...instead of compiling with something like this
    def old_compile(self, loss, sgd):
        self.model.compile(loss=loss, optimizer=sgd, metrics=["accuracy"])

This is fine, except that it forces me to hardcode the loss/accuracy type used for Model A (so that I can define these custom metrics). I would like to provide these as parameters to my model-definition script so that modeling is more customizable. But as I see it, this means I essentially have to reproduce Keras' internal switching behavior from the training.py/metrics.py links below, which is obviously a headache and redundant.
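The closest thing I've found to avoiding that duplication is a small wrapper that renames an arbitrary metric function rather than reimplementing it. This is only a sketch: `rename_metric` is a hypothetical helper of my own, and it relies on the fact that Keras derives the logged metric label from the function's `__name__`.

```python
def rename_metric(metric_fn, name):
    """Return a wrapper around metric_fn that Keras will log under `name`.

    Keras builds the displayed metric label from the function's __name__,
    so renaming the wrapper is enough -- the metric body itself can be any
    metric passed in as a parameter, with no hardcoding.
    """
    def wrapped(y_true, y_pred):
        return metric_fn(y_true, y_pred)
    wrapped.__name__ = name
    return wrapped

# Hypothetical usage: pass any built-in metric plus the Model B-style name, e.g.
# model.compile(loss=loss, optimizer=sgd,
#               metrics=[rename_metric(keras.metrics.binary_accuracy, "output_1_acc")])
```

This keeps the metric choice parameterized, but it still feels like working around Keras rather than with it.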

Is there a cleaner way?

Thanks!

For reference, these links describe the approach I'm currently using and that I describe above, in more detail:

How to use ModelCheckpoint with custom metrics in Keras?

How does keras define “accuracy” and “loss”?

Keras training.py

Keras metrics.py

Keras usage of metrics

Keras usage of losses

John Cast
  • 1,771
  • 3
  • 18
  • 40
  • So you want for Tensorboard to plot both losses, etc. on the same graph? How are you currently passing that to Tensorboard to plot it in separate graphs? – DarkCygnus Oct 28 '17 at 03:24
  • You should check out Losswise, it's similar to Tensorboard, it integrates with Keras super easily (https://docs.losswise.com/#keras-plugin) and makes it trivial to do things like viewing multiple graphs on top of each other. – nicodjimenez Oct 29 '17 at 03:44

1 Answer

0

I'm not sure I understand your situation well.

For me, you just have to make a correct directory tree structure for your log output and then launch TensorBoard from the root folder.

Example: consider \ as the root. If you log your loss data from Model A in a folder named model_A and the loss from Model B in a folder named model_B:

  • \
  • \model_A
    • \model_A\output1
  • \model_B
    • \model_B\output1
    • \model_B\output2

If you launch TensorBoard from \ you will get each loss on the same display.
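As a concrete sketch of that layout (the folder names are just the hypothetical ones from the list above, with a `logs` root instead of \):

```shell
# create one log subfolder per model/output
mkdir -p logs/model_A/output1 logs/model_B/output1 logs/model_B/output2

# point each model's TensorBoard callback at its own subfolder, then launch
# from the root so all runs appear together:
#   tensorboard --logdir logs
```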

Anyway, you can redefine the behaviour of the default TensorBoard callback to suit your needs. Regards
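For instance, a subclass could remap metric-log keys before they reach TensorBoard's writer; the remapping itself is just a dict translation. A sketch (`rename_logs` and the mapping are hypothetical; in a real subclass you would apply it inside `on_epoch_end` before delegating to the parent callback):

```python
def rename_logs(logs, rename_map):
    """Translate Keras metric-log keys, e.g. Model A's 'acc' -> 'output_1_acc',
    leaving any unmapped keys untouched."""
    return {rename_map.get(key, key): value for key, value in (logs or {}).items()}

# Hypothetical mapping so Model A's scalars land on Model B's graphs:
MODEL_A_MAP = {
    "acc": "output_1_acc",
    "loss": "output_1_loss",
    "val_acc": "val_output_1_acc",
    "val_loss": "val_output_1_loss",
}
```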

Romain Cendre
  • 309
  • 2
  • 12