35

I am using Keras with the TensorFlow backend. My work involves comparing the performance of several models such as Inception, VGG and ResNet on my dataset. I would like to plot the training accuracies of these models in one graph. I have been trying to do this in TensorBoard, but it is not working.

Is there a way of plotting multiple graphs in one plot using Tensorboard or is there some other way I can do this?

Thank you

Kavitha Devan
  • See the answers to [TensorBoard - Plot training and validation losses on the same graph?](https://stackoverflow.com/questions/37146614). – Ben Mares Mar 17 '19 at 21:09

4 Answers

30

If you are using the SummaryWriter from tensorboardX or PyTorch 1.2, you have a method called add_scalars:

Call it like this:

my_summary_writer.add_scalars(f'loss/check_info', {
    'score': score[iteration],
    'score_nf': score_nf[iteration],
}, iteration)

And it will show up like this:

(TensorBoard screenshot: the resulting chart)


Be careful that add_scalars will mess with the organisation of your runs: it will add multiple entries to the run list (and thus create confusion):

(TensorBoard screenshot: the run list with the extra entries)

I would recommend that instead you just do:

my_summary_writer.add_scalar(f'check_info/score',    score[iteration],    iteration)
my_summary_writer.add_scalar(f'check_info/score_nf', score_nf[iteration], iteration)
Benjamin Crouzier
  • What does 'nf' stand for in your examples? – Bram Vanroy Sep 22 '19 at 12:19
  • @BramVanroy It's a trading model, nf means "no fee", a calculation of the score without taking fees into account. It's specific to my use case, you can name your scalars whatever you want (loss, performance, my_param...) – Benjamin Crouzier Sep 23 '19 at 16:10
  • @BenjaminCrouzier I'm trying your recommendation and it doesn't appear to be grouping scalar traces with the same prefix together into a single plot. Is there some trick to be aware of? – Rylan Schaeffer Jan 04 '20 at 22:10
  • Same, the plots are placed next to each other horizontally, but are still separate with different tags. – tangfucius Apr 02 '20 at 17:00
  • I started with that before coming here, but the recommended approach in this answer is not the solution to the question asked here. I also wanted to do this in PyTorch, while preserving the clean folder structure. I posted my answer here: https://stackoverflow.com/a/62203250/5235274 – meferne Jun 04 '20 at 20:15
  • Can someone please explain what's the reasoning behind "`add_scalars` [messing] with the organisation of your runs"? What do multiple runs have to do with multiple graphs in one? – Ram Rachum Jun 16 '22 at 20:30
  • Please explain the "instead" part. Doing this creates multiple figures instead of a single one with multiple plots. – Gulzar Aug 18 '22 at 06:54
22

Here is a way to have multiple graphs in one plot grouped into one single run, using add_custom_scalars with PyTorch.

What I get:

(TensorBoard screenshot: the resulting custom scalars charts)

The corresponding complete, runnable code:

from torch.utils.tensorboard import SummaryWriter
import math

# The layout maps a category ("ABCDE") to charts; each chart ("loss",
# "accuracy") overlays the listed scalar tags as a multiline plot.
layout = {
    "ABCDE": {
        "loss": ["Multiline", ["loss/train", "loss/validation"]],
        "accuracy": ["Multiline", ["accuracy/train", "accuracy/validation"]],
    },
}

writer = SummaryWriter()
writer.add_custom_scalars(layout)


epochs = 10
batch_size = 50

for epoch in range(epochs):
    for index in range(batch_size):
        global_batch_index = epoch * batch_size + index

        train_loss = math.exp(-0.01 * global_batch_index)
        train_accuracy = 1 - math.exp(-0.01 * global_batch_index)

        writer.add_scalar("loss/train", train_loss, global_batch_index)
        writer.add_scalar("accuracy/train", train_accuracy, global_batch_index)

    validation_loss = train_loss + 0.1
    validation_accuracy = train_accuracy - 0.1

    writer.add_scalar("loss/validation", validation_loss, global_batch_index)
    writer.add_scalar("accuracy/validation", validation_accuracy, global_batch_index)

writer.close()

Please note that the tab to use, at the top left of the window, is not SCALARS but CUSTOM SCALARS.

Manu NALEPA
6
  • You can definitely plot scalars such as the loss and validation accuracy: tf.summary.scalar("loss", cost), where cost is a tensor such as cost = tf.reduce_mean(-tf.reduce_sum(y * tf.log(pred), reduction_indices=1))

  • You write a summary for every value you want to plot, and you can then merge all of these summaries into a single summary with: merged_summary_op = tf.summary.merge_all()

  • The next step is to run this merged summary in the session: summary = sess.run(merged_summary_op)

  • After you run merged_summary_op, you have to write the summary using a summary writer: summary_writer.add_summary(summary, epoch_number), where summary_writer = tf.summary.FileWriter(logs_path, graph=tf.get_default_graph())

  • Now open a terminal or cmd and run: tensorboard --logdir="logs_path"

  • Then open http://0.0.0.0:6006/ in your web browser

  • You can refer to the following link: https://github.com/jayshah19949596/Tensorboard-Visualization-Freezing-Graph

  • Other things you can plot are the weights and the inputs

  • You can also display images on TensorBoard

  • I think that if you are using Keras with TensorFlow 1.5, then using TensorBoard is easy, because Keras is included in TensorFlow 1.5 as its official high-level API

  • I am sure you can plot different accuracies on the same graph for the same model with different hyper-parameters by using different FileWriter instances with different log paths

  • Check the image below: (TensorBoard screenshot)

  • I don't know if you can plot the accuracies of different models on the same graph out of the box... but you can set this up yourself

  • You can write the summary information of different models to different directories and then point TensorBoard to the parent directory, so that the accuracies of the different models are plotted on the same graph, as suggested in the comment by @RobertLugg. A minimal end-to-end sketch of these steps follows this list.
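For reference, here is a minimal end-to-end sketch of the steps above. It is only a sketch under some assumptions: TensorFlow 1.x graph mode, made-up model names (model_a, model_b), a logs/ parent directory, and dummy loss values in place of a real training loop:

import tensorflow as tf  # assumes TensorFlow 1.x (graph mode)

# Write the same scalar tag ("loss") from two hypothetical models into
# two subdirectories of logs/, so TensorBoard overlays them on one chart.
for model_name in ['model_a', 'model_b']:
    tf.reset_default_graph()

    loss_value = tf.placeholder(tf.float32, name='loss_value')
    tf.summary.scalar('loss', loss_value)       # one summary per value to plot
    merged_summary_op = tf.summary.merge_all()  # merge all summaries into one op

    summary_writer = tf.summary.FileWriter('logs/' + model_name,
                                           graph=tf.get_default_graph())
    with tf.Session() as sess:
        for epoch_number in range(100):
            # Dummy loss curve; in practice this comes from your training step.
            fake_loss = 1.0 / (epoch_number + 1) if model_name == 'model_a' else 2.0 / (epoch_number + 2)
            summary = sess.run(merged_summary_op, feed_dict={loss_value: fake_loss})
            summary_writer.add_summary(summary, epoch_number)
    summary_writer.close()

Then run tensorboard --logdir=logs: both models appear as separate runs and their "loss" curves are drawn on the same chart.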

==================== UPDATED ====================

I have tried saving the accuracy and loss of different models to different directories and then pointing TensorBoard to the parent directory, and it works: you get the results of the different models in the same graph.

Wang Xiaoming
Jai
  • If I understand correctly, this answer doesn't explain how to place multiple values on a single chart. It may be necessary to write summary information to individual subdirectories and point TensorBoard to the parent directory – Robert Lugg Mar 06 '18 at 00:46
  • Yes you are right @RobertLugg ... I included that in my answer... I forgot to show how you can plot different summary in the same plot – Jai Mar 06 '18 at 08:58
  • Please post code as code. – Gulzar Aug 23 '22 at 20:29
4

Just save each run in a different folder under a main folder and open TensorBoard on the main folder.

from keras.callbacks import TensorBoard

for i in range(x):
    # One log subdirectory per run, all under ./logs
    tensorboard = TensorBoard(log_dir='./logs/' + 'run' + str(i), histogram_freq=0,
                              write_graph=True, write_images=False)

    model.fit(X, Y, epochs=150, batch_size=10, callbacks=[tensorboard])

From the terminal, run TensorBoard as follows:

tensorboard --logdir=logs
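
The same pattern extends to comparing different architectures, which is what the question asks about. Here is a minimal sketch; the two small Sequential models and the random arrays are only stand-ins for your real models (Inception, VGG, ResNet, ...) and dataset:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import TensorBoard

# Dummy data standing in for your dataset.
X = np.random.rand(200, 8)
Y = np.random.randint(0, 2, size=(200, 1))

# Two small stand-in models; in practice these would be Inception, VGG, ResNet, ...
models = {
    'small_net': Sequential([Dense(4, activation='relu', input_shape=(8,)),
                             Dense(1, activation='sigmoid')]),
    'bigger_net': Sequential([Dense(32, activation='relu', input_shape=(8,)),
                              Dense(16, activation='relu'),
                              Dense(1, activation='sigmoid')]),
}

for name, model in models.items():
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    # One log subdirectory per model -> one curve per model on the same chart.
    tensorboard = TensorBoard(log_dir='./logs/' + name, histogram_freq=0,
                              write_graph=False, write_images=False)
    model.fit(X, Y, epochs=20, batch_size=10, callbacks=[tensorboard], verbose=0)

Running tensorboard --logdir=logs then shows each model as a separate run, so their accuracy curves are overlaid on the same chart.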