
TensorFlow version = 2.0.0

I am following the example of how to use the TensorFlow summary module at https://www.tensorflow.org/api_docs/python/tf/summary (the first one on the page), which for completeness I paste below:

writer = tf.summary.create_file_writer("/tmp/mylogs")
with writer.as_default():
  for step in range(100):
    # other model code would go here
    tf.summary.scalar("my_metric", 0.5, step=step)
    writer.flush()

Running this is fine, and I get event logs that I can view in TensorBoard. Great! However, when I look in the event log using:

tensorboard --inspect --logdir=tmp/mylogs

it tells me that my summary variable has been written to the log as a Tensor for some reason, not a Scalar:

Event statistics for tmp/mylogs:
audio -
graph -
histograms -
images -
scalars -
sessionlog:checkpoint -
sessionlog:start -
sessionlog:stop -
tensor
   first_step           0
   last_step            99
   max_step             99
   min_step             0
   num_steps            100
   outoforder_steps     [(99, 0)]

I guess that might not be a problem, except that when I try to read from the event log following the method in e.g. https://stackoverflow.com/a/45899735/1447953:

from tensorboard.backend.event_processing.event_accumulator import EventAccumulator
x = EventAccumulator(path="tmp/mylogs")
x.Reload()
print(x.Tags())

then it again tells me that my_metric is a Tensor:

{'images': [], 'audio': [], 'histograms': [], 'scalars': [], 'distributions': [], 'tensors': ['my_metric'], 'graph': False, 'meta_graph': False, 'run_metadata': []}

and when I try to look at the data, it is gibberish:

w_times, step_nums, vals = zip(*x.Tensors('my_metric'))
print("vals:", vals)

vals: (dtype: DT_FLOAT
tensor_shape {
}
tensor_content: "\000\000\000?"
, dtype: DT_FLOAT
tensor_shape {
}
tensor_content: "\000\000\000?"
, dtype: DT_FLOAT
tensor_shape {
}
...
etc.            

Am I doing something wrong here? The example seemed pretty simple, so I'm not sure what the problem could be; I just copy/pasted it. Or maybe they decided to always stick data under the 'tensor' tags, and there is some way to convert the values back to something usable in standard plotting tools?

Edit: OK, right at the bottom of the migration doc https://www.tensorflow.org/tensorboard/migrate it says:

The event file binary representation has changed:

  • TensorBoard 1.x already supports the new format; this difference only affects users who are manually parsing summary data from event files

  • Summary data is now stored as tensor bytes; you can use tf.make_ndarray(event.summary.value[0].tensor) to convert it to numpy

So I guess that means the storage as 'tensor' is normal. The conversion is still mysterious to me though: they seem to be referring to a different interface than the EventAccumulator one I found. And it also seems that I only get 10 out of 100 events recorded, which I also find mysterious.
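
Presumably the conversion they describe applies to the raw event-file interface, something like the following sketch (this uses tf.compat.v1.train.summary_iterator; the event file name is just a placeholder for whatever file actually appears in the log directory):

import tensorflow as tf

# placeholder name: substitute the actual events.out.tfevents.* file
event_file = "/tmp/mylogs/events.out.tfevents.example"
for event in tf.compat.v1.train.summary_iterator(event_file):
    for value in event.summary.value:
        if value.tag == "my_metric":
            # per the migration doc, tf.make_ndarray converts the
            # stored TensorProto back to a numpy value
            print(event.step, tf.make_ndarray(value.tensor))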

Ben Farmer

3 Answers


I had the same issue and I was able to load all the data using tf.compat.v1.train.summary_iterator().

import os
import matplotlib.pyplot as plt
import tensorflow as tf
import pandas as pd

path = "logs"
listOutput = os.listdir(path)

listDF = []
key = "loss". # tag

for tb_output_folder in listOutput:
    print(tb_output_folder)
    folder_path = os.path.join(path, tb_output_folder)
    file = os.listdir(folder_path)[0]  # assumes one event file per run folder

    tensors = []
    steps = []
    for e in tf.compat.v1.train.summary_iterator(os.path.join(folder_path, file)):
        for v in e.summary.value:
            if v.tag == key:
                tensors.append(v.tensor)
                steps.append(e.step)

    values = [tf.make_ndarray(t) for t in tensors]

    plt.plot(steps, values)

    df = pd.DataFrame(data=values)
    df.to_csv("{}.csv".format(tb_output_folder))

plt.show()
Anita

Well, I still don't know why writing scalar summary data gets me tensor events, but I found out how to decode them at least. (The tensor_content bytes "\000\000\000?" are just the raw little-endian float32 encoding of 0.5: "?" is byte 0x3f.) The following is based on the answer at https://stackoverflow.com/a/55788491/1447953, updated slightly for TensorFlow 2:

import tensorflow as tf

def decode(val):
    # val is a TensorProto: pull out the raw bytes, dtype, and shape,
    # then reassemble them into a tf.Tensor
    tensor_bytes = val.tensor_content
    tensor_dtype = val.dtype
    tensor_shape = [x.size for x in val.tensor_shape.dim]
    tensor_array = tf.io.decode_raw(tensor_bytes, tensor_dtype)
    tensor_array = tf.reshape(tensor_array, tensor_shape)
    return tensor_array

print([decode(v) for v in vals])
print([decode(v).numpy() for v in vals])

Output:

[<tf.Tensor: id=3, shape=(), dtype=float32, numpy=0.5>, <tf.Tensor: id=7, shape=(), dtype=float32, numpy=0.5>, <tf.Tensor: id=11, shape=(), dtype=float32, numpy=0.5>, <tf.Tensor: id=15, shape=(), dtype=float32, numpy=0.5>, <tf.Tensor: id=19, shape=(), dtype=float32, numpy=0.5>, <tf.Tensor: id=23, shape=(), dtype=float32, numpy=0.5>, <tf.Tensor: id=27, shape=(), dtype=float32, numpy=0.5>, <tf.Tensor: id=31, shape=(), dtype=float32, numpy=0.5>, <tf.Tensor: id=35, shape=(), dtype=float32, numpy=0.5>, <tf.Tensor: id=39, shape=(), dtype=float32, numpy=0.5>]
[0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
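
Incidentally, the tf.make_ndarray function mentioned in the migration doc seems to work directly on these TensorProto values too, which would make the manual decode function above unnecessary (I have only checked the scalar case here):

print([float(tf.make_ndarray(v)) for v in vals])  # the same ten 0.5 values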

This is still not the complete story though, because this only gets me 10 events, whereas I expected 100 to have been recorded. But I guess this is an issue of how the original recording occurs, because the step_nums I get are:

(3, 20, 24, 32, 53, 41, 58, 70, 78, 99)

so I guess only those iterations were written to disk. But why? I didn't see anything in the docs about selective writing occurring automatically.

Ben Farmer

I also had the problem that the values were not returned for every step, and I found the crucial missing piece:

x = EventAccumulator(path="tmp/mylogs", size_guidance={"tensors": 0})

As you can see in the code above, passing the argument size_guidance={"tensors": 0} gets you the values for all 100 steps. The EventAccumulator's default size guidance keeps only 10 tensors, which is why only 10 of the 100 events showed up; a value of 0 means keep everything.
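
For completeness, here is a sketch combining this with the tf.make_ndarray conversion from the question's edit, which should recover all 100 values:

from tensorboard.backend.event_processing.event_accumulator import EventAccumulator
import tensorflow as tf

# size_guidance of 0 means keep every event instead of the default 10 tensors
x = EventAccumulator(path="tmp/mylogs", size_guidance={"tensors": 0})
x.Reload()

w_times, step_nums, vals = zip(*x.Tensors("my_metric"))
values = [float(tf.make_ndarray(v)) for v in vals]  # decode each TensorProto
print(len(values))  # should now be 100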

I've attached the article I was referring to below.

web: How to read data from tensorflow 2 summary writer

jarvan