
The graph object in TensorFlow has a method called get_tensor_by_name(name). Is there any way to get a list of valid tensor names?

If not, does anyone know the valid names for the pretrained model inception-v3 from here? From their example, pool_3 is one valid tensor name, but a list of all of them would be nice. I looked at the paper it refers to, and some of the layers seem to correspond to the sizes in Table 1, but not all of them.

John Pertoft
    Just a small update to [*etarion's* answer](https://stackoverflow.com/a/35337827/6862058), `op.values()` is deprecated. Use `op.outputs` instead. – Deadly Piglet Apr 25 '18 at 19:58

6 Answers


The paper does not accurately reflect the model. If you download the source from arXiv, it includes an accurate model description as model.txt, and the names in there correlate strongly with the names in the released model.

To answer your first question: sess.graph.get_operations() gives you a list of operations. For an op, op.name gives you its name and op.values() gives you a list of the tensors it produces (in the inception-v3 model, all tensor names are the op name with ":0" appended, so pool_3:0 is the tensor produced by the final pooling op).
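A minimal sketch of this on a toy graph, built with the TF 1.x-style API via tf.compat.v1 (the op names here are made up for illustration). Note that op.values() has since been deprecated in favor of op.outputs, which is used below:

```python
import tensorflow as tf

tf1 = tf.compat.v1  # TF 1.x-style graph API

# Build a tiny graph with explicitly named ops
g = tf1.Graph()
with g.as_default():
    a = tf1.constant([1.0, 2.0], name="a")
    pool = tf1.reduce_max(a, name="pool")

# Each op's name, plus the names of the tensors it produces
for op in g.get_operations():
    print(op.name, [t.name for t in op.outputs])
# The tensor produced by the op named "pool" is "pool:0"
```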

etarion
    Thanks for the quick answer! There still seems to be some differences in the model.txt and the outputshape I'm seeing with this pretrained model. For example if I look at "pool:0" which I guess is the first pooling layer, I get the shape 73x73x64 but the input to the layer after it in model.txt is 73x73x80. Or am I misunderstanding something? – John Pertoft Feb 11 '16 at 11:52
  • @john I did not dig through the comments in the model.txt in depth, I think there are some inconsistencies there in the comments - I didn't find inconsistencies in the non-comments. For that pool layer, the previous convolution has 64 filter banks (second argument to Conv in the assignment to conv_2), so the pooling layer also has 64 channels. The 80 is the number of outputs of the next conv layer ... – etarion Feb 11 '16 at 12:35
  • @etarion where can I download this model.txt? Could you give me a direct link please? Thanks in advance – Oleksandr Khryplyvenko Mar 02 '16 at 19:33
  • 2
    @OleksandrKhryplyvenko it's part of the source distribution of http://arxiv.org/format/1512.00567v3 – etarion Mar 02 '16 at 19:57
  • @etarion thanks! Btw, I've built inception v3 graph. Excessive memory consumption during graph construction was caused by duplicate entries, as you'd told me. – Oleksandr Khryplyvenko Mar 03 '16 at 16:11

The answers above are correct. I came across simple, easy-to-understand code for this task, so I'm sharing it here:

import tensorflow as tf

def printTensors(pb_file):

    # read pb into graph_def
    with tf.gfile.GFile(pb_file, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    # import graph_def
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def)

    # print operations
    for op in graph.get_operations():
        print(op.name)


printTensors("path-to-my-pbfile.pb")
Akash Goyal
    I get a `google.protobuf.message.DecodeError: Error parsing message` from ParseFromString, but I'm working with a `saved_model`. Why are there so many formats for models in TF? It kinda sucks. – CpILL Nov 13 '19 at 21:16

To see the operations in the graph (there will be many, so only a single entry is shown here):

sess = tf.Session()
ops = sess.graph.get_operations()
[m.values() for m in ops][1]

out:
(<tf.Tensor 'conv1/weights:0' shape=(4, 4, 3, 32) dtype=float32_ref>,)
Prakash Vanapalli

You do not even have to create a session to see the names of all the operations in the graph. Just grab the default graph with tf.get_default_graph() and list its operations with get_operations(). Each operation has many fields; the one you need is name.

Here is the code:

import tensorflow as tf
a = tf.Variable(5)
b = tf.Variable(6)
c = tf.Variable(7)
d = (a + b) * c

for i in tf.get_default_graph().get_operations():
    print(i.name)
Salvador Dali
    This works, but for some reason the inception model (the only model I've tried it on) has a `:0` within most of its names, and that is not reflected by the `i.name` code above. Why is that? – Moondra Oct 27 '17 at 17:11
  • AttributeError: module 'tensorflow' has no attribute 'get_default_graph' - is this code for tf2.x as well? – user3428154 Sep 09 '20 at 16:25
  • To get the graph in >= Tf 2.0, don't use tf.get_default_graph(). Instead, you need to instantiate a tf.function first and access the graph attribute as follows: graph = func.get_concrete_function().graph – Mwenda Jan 28 '21 at 22:38
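Following the last comment, here is a minimal TF 2.x sketch (the function and its names are made up for illustration):

```python
import tensorflow as tf

@tf.function
def double_plus_one(x):
    return x * 2.0 + 1.0

# Trace the function into a concrete graph, then list its operations
graph = double_plus_one.get_concrete_function(
    tf.TensorSpec(shape=[], dtype=tf.float32)
).graph

for op in graph.get_operations():
    print(op.name)
```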

As a nested list comprehension:

tensor_names = [t.name for op in tf.get_default_graph().get_operations() for t in op.values()]

Function to get names of Tensors in a graph (defaults to default graph):

def get_names(graph=tf.get_default_graph()):
    return [t.name for op in graph.get_operations() for t in op.values()]

Function to get Tensors in a graph (defaults to the default graph):

def get_tensors(graph=tf.get_default_graph()):
    return [t for op in graph.get_operations() for t in op.values()]

Note that a default argument like graph=tf.get_default_graph() is evaluated once, when the function is defined, so pass the graph explicitly if you create a new one later.
eqzx

saved_model_cli is an alternative command-line tool that ships with TF and can be useful if you're dealing with the SavedModel format. From the docs:

saved_model_cli show --dir /tmp/mobilenet/1 --tag_set serve --all

The output looks something like this:

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['__saved_model_init_op']:
  The given SavedModel SignatureDef contains the following input(s):
  The given SavedModel SignatureDef contains the following output(s):
    outputs['__saved_model_init_op'] tensor_info:
        dtype: DT_INVALID
        shape: unknown_rank
        name: NoOp
  Method name is: 

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['dense_input'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 1280)
        name: serving_default_dense_input:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['dense_1'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 1)
        name: StatefulPartitionedCall:0
  Method name is: tensorflow/serving/predict
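The same information is also reachable from Python via tf.saved_model.load. A sketch that saves a tiny made-up model and inspects its serving signature (the Doubler module and the temp path are invented for illustration):

```python
import tempfile

import tensorflow as tf

class Doubler(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([None, 1], tf.float32)])
    def __call__(self, x):
        return x * 2.0

path = tempfile.mkdtemp()
module = Doubler()
# Export __call__ explicitly as the serving_default signature
tf.saved_model.save(module, path,
                    signatures=module.__call__.get_concrete_function())

loaded = tf.saved_model.load(path)
sig = loaded.signatures["serving_default"]
print(sig.structured_input_signature)  # input specs
print(sig.structured_outputs)          # output tensor specs
```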
CpILL