Is it possible to accumulate tf.summary data for evaluation / test sets? (in a non-hacky way)

By non-hacky, I mean something smarter than my current solution:

import tensorflow as tf

# init writer
writer = tf.summary.FileWriter(path, graph)

# build model
...

# add stuff
for v in tf.trainable_variables():
    tf.summary.histogram(v.name, v)
tf.summary.scalar("loss", loss)
...

# merge summary
merged = tf.summary.merge_all()

# during training --- everything fine, since we operate per mini-batch only
summary, _ = session.run([merged, optimizer_op],
                         feed_dict={X: train_batch, Y: train_batch_labels})

writer.add_summary(summary, train_step)


# test eval
# here it does get ugly, because we need to buffer every mini-batch
# of the whole test set in order to get accuracy, loss, ... for the
# whole test set and not only per mini-batch
buffer_loss, buffer_accuracy = [], []
for batch, batch_labels in test_data:
    loss, accuracy = session.run([lossop, accuop],
                                 feed_dict={X: batch, Y: batch_labels})
    buffer_loss.append(loss)
    buffer_accuracy.append(accuracy)

# this includes a new filewriter for test evaluation
# plus a new operation that calcs the mean over both buffers
# plus a new summary for the calculated means
# plus writing that data
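
Spelled out, those elided steps would look roughly like this (a minimal sketch; test_path, mean_loss_ph and mean_accuracy_ph are names I'm assuming here, not part of the original code):

import numpy as np

# second writer so test summaries land in their own run directory
test_writer = tf.summary.FileWriter(test_path, graph)

# placeholders (built once) so the Python-side means can be fed back in
mean_loss_ph = tf.placeholder(tf.float32, name="mean_loss")
mean_accuracy_ph = tf.placeholder(tf.float32, name="mean_accuracy")

# separate summary ops just for the test-set means
test_summaries = tf.summary.merge([
    tf.summary.scalar("test_loss", mean_loss_ph),
    tf.summary.scalar("test_accuracy", mean_accuracy_ph),
])

# feed the buffered means and write the resulting summary
summary = session.run(test_summaries,
                      feed_dict={mean_loss_ph: np.mean(buffer_loss),
                                 mean_accuracy_ph: np.mean(buffer_accuracy)})
test_writer.add_summary(summary, train_step)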

While this works, it is a bad solution: I have to re-create summary ops just for the test-set evaluation, keep external buffers in Python to iterate over each mini-batch of the whole test set, and then pass the completed buffers back to TensorFlow just to get a mean over all test batches.
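
For reference, the same manually computed means can be written without the extra placeholders and summary ops by building a tf.Summary protobuf directly; it is still the same buffering hack, just with less graph clutter (a sketch, reusing the assumed test_writer and buffers from above):

summary = tf.Summary(value=[
    tf.Summary.Value(tag="test_loss",
                     simple_value=float(np.mean(buffer_loss))),
    tf.Summary.Value(tag="test_accuracy",
                     simple_value=float(np.mean(buffer_accuracy))),
])
test_writer.add_summary(summary, train_step)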

daniel451

0 Answers