22

I use a custom model for classification in the TensorFlow Camera Demo. I generated a .pb file (serialized protobuf file) and I could display the huge graph it contains. To convert this graph to an optimized graph, as given in [https://www.oreilly.com/learning/tensorflow-on-android], the following procedure can be used:

$ bazel-bin/tensorflow/python/tools/optimize_for_inference  \
--input=tf_files/retrained_graph.pb \
--output=tensorflow/examples/android/assets/retrained_graph.pb \
--input_names=Mul \
--output_names=final_result

How can I find the input_names and output_names from the graph display? When I don't use the proper names, I get a device crash:

E/TensorFlowInferenceInterface(16821): Failed to run TensorFlow inference 
with inputs:[AvgPool], outputs:[predictions]

E/AndroidRuntime(16821): FATAL EXCEPTION: inference

E/AndroidRuntime(16821): java.lang.IllegalArgumentException: Incompatible 
shapes: [1,224,224,3] vs. [32,1,1,2048]

E/AndroidRuntime(16821):     [[Node: dropout/dropout/mul = Mul[T=DT_FLOAT, 
_device="/job:localhost/replica:0/task:0/cpu:0"](dropout/dropout/div, 
dropout/dropout/Floor)]]
Santle Camilus
  • Hi @Dr.SantleCamilus, did you get the solution? – Uma Achanta Aug 28 '17 at 06:53
  • Yes, mentioning the proper input and output node names is essential for the Android TF demo to work. Some older TF training code may not include these names in the model. The presence of node names can be checked using the answer below by JP Kim. If no names are present, you need to migrate to newer TF training code that includes proper node names. – Santle Camilus Aug 28 '17 at 09:55
  • I am getting the output like this: [u'image_tensor=>Placeholder'] – Uma Achanta Aug 28 '17 at 10:26
  • Can you please help me understand what it means? – Uma Achanta Aug 28 '17 at 10:27
  • [u'image_tensor=>Placeholder'] means that your input node name is "image_tensor" (you can use --input_names=image_tensor when running optimize_for_inference). – Santle Camilus Aug 29 '17 at 08:50
  • @Dr.Santle-camilus - what is the output name? It shows an error that a node named "output" doesn't exist, as I kept the output as "output_name". Please help. – Uma Achanta Aug 30 '17 at 04:59
  • Please check for the presence of a Softmax node in your model using the answer below by JP Kim. If it returns any, use that name as the output name. The output name is the specific node which generates the output of the CNN network. – Santle Camilus Aug 30 '17 at 06:13

3 Answers

23

Try this:

run python

>>> import tensorflow as tf
>>> gf = tf.GraphDef()
>>> gf.ParseFromString(open('/your/path/to/graphname.pb','rb').read())

and then

>>> [n.name + '=>' +  n.op for n in gf.node if n.op in ( 'Softmax','Placeholder')]

Then you can get a result similar to this:

['Mul=>Placeholder', 'final_result=>Softmax']

But I'm not sure the node names are the problem given the error messages. I guess you provided wrong arguments when loading the graph file, or something is wrong with your generated graph file.

Check this part:

E/AndroidRuntime(16821): java.lang.IllegalArgumentException: Incompatible 
shapes: [1,224,224,3] vs. [32,1,1,2048]

UPDATE: Sorry, if you're using a (re)trained graph, then try this:

[n.name + '=>' +  n.op for n in gf.node if n.op in ( 'Softmax','Mul')]

It seems that a (re)trained graph saves its input/output op names as "Mul" and "Softmax", while an optimized and/or quantized graph saves them as "Placeholder" and "Softmax".
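If you're not sure which variant of the graph you have, you could check for all three op types in one pass (a minimal variation of the snippet above):

[n.name + '=>' + n.op for n in gf.node if n.op in ('Placeholder', 'Mul', 'Softmax')]

Keep in mind that 'Mul' is also the type of ordinary multiply ops, so on some graphs this lists many internal nodes (as the comments below illustrate).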

BTW, using a retrained graph in a mobile environment is not recommended according to Pete Warden's post: https://petewarden.com/2016/09/27/tensorflow-for-mobile-poets/ . It's better to use a quantized or memmapped graph due to performance and file-size issues. I couldn't find out how to load a memmapped graph in Android though... :( (there is no problem loading an optimized/quantized graph in Android).

JP Kim
  • When I execute the command for my custom model: [n.name + '=>' + n.op for n in input_graph_def.node if n.op in ('Softmax','Placeholder')], I get [u'tower_0/logits/predictions=>Softmax']. The output layer name is displayed while the input layer name is not present. I can't understand where things go wrong. – Santle Camilus Apr 24 '17 at 07:56
  • @Dr.SantleCamilus, I think the reason you get an error while loading the graph file is that you tried to load a graph not optimized for mobile. You should not use the pb file from the retraining output directly; it has a DecodeJpeg issue on mobile. So just convert it using optimize_for_inference and/or quantize_graph. Both are fine, but a quantized graph is better. – JP Kim Apr 25 '17 at 01:33
  • The output of [n.name + '=>' + n.op for n in gf.node if n.op in ('Softmax','Placeholder')] after the optimize_for_inference, quantize_graph or transform_graph operation is [u'tower_0/logits/predictions=>Softmax']. – Santle Camilus Apr 25 '17 at 14:33
  • The output of [n.name + '=>' + n.op for n in gf.node if n.op in ('Softmax','Mul')] after optimize_for_inference or quantize_graph is [u'tower_0/conv0/BatchNorm/moments/normalize/shifted_mean=>Mul', u'tower_0/conv0/BatchNorm/moments/normalize/Mul=>Mul', ........... u'tower_0/mixed_8x8x2048b/branch_pool/Conv/BatchNorm/batchnorm/mul=>Mul', u'tower_0/mixed_8x8x2048b/branch_pool/Conv/BatchNorm/batchnorm/mul_1=>Mul', u'tower_0/logits/dropout/dropout/random_uniform/mul=>Mul', u'tower_0/logits/dropout/dropout/mul=>Mul', u'tower_0/logits/predictions=>Softmax'] – Santle Camilus Apr 25 '17 at 14:36
  • The history goes like this: the TensorFlow models are created using the Inception V3 architecture: https://github.com/tensorflow/models/tree/master/inception The models are saved in checkpoint (ckpt) format (.meta, .index and .data). The model is converted into a .pb file to port to the TensorFlow camera demo (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/android/README.md) – Santle Camilus Apr 25 '17 at 14:37
  • Hmm, weird. Could you double-check that the following is what you did? 1) create checkpoint files using ImageNet training, 2) freeze them to a protobuf (.pb) using freeze_graph, 3) optimize the frozen graph using optimize_for_inference (this needs the input/output node names; try with Mul & final_result - you can use the pb file in Android from here on), 4) optional: quantize the optimized graph using quantize_graph – JP Kim Apr 26 '17 at 02:21
  • Yes, I followed the same process. I raised the issue on the corresponding GitHub repository: https://github.com/tensorflow/models/issues/1420 More details are presented there. – Santle Camilus Apr 27 '17 at 10:28
  • @Dr.SantleCamilus , Please refer here: http://stackoverflow.com/a/43662693/4571192 I guess it can help you. – JP Kim May 03 '17 at 07:28
  • I am getting the output like this: [u'image_tensor=>Placeholder'] – Uma Achanta Aug 28 '17 at 10:32
  • I received this error: AttributeError: module 'tensorflow' has no attribute 'GraphDef' – Admia Nov 01 '21 at 15:19
10

Recently I came across this option, directly from TensorFlow:

bazel build tensorflow/tools/graph_transforms:summarize_graph
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph \
--in_graph=custom_graph_name.pb
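
If building with bazel is inconvenient, a rough approximation can be scripted directly against the GraphDef in Python (a minimal sketch; the file name is a placeholder and it only mimics part of what summarize_graph reports):

import collections
import tensorflow as tf

# Load the frozen GraphDef (file name is a placeholder).
gd = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile('custom_graph_name.pb', 'rb') as f:
    gd.ParseFromString(f.read())

# Count nodes by op type, and list candidate input nodes
# (nodes with no inputs that are not constants).
print(collections.Counter(n.op for n in gd.node).most_common(10))
print([n.name for n in gd.node if not n.input and n.op != 'Const'])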
Santle Camilus
8

I wrote a simple script to analyze the dependency relations in a computational graph (usually a DAG, directed acyclic graph). The inputs are obviously the nodes that lack an input. However, outputs can be defined as any nodes in a graph because, in the weirdest but still valid case, the outputs can be the inputs while all other nodes are dummies. In the code below I define the output operations as nodes without an output; you can ignore this detail if you wish.

import tensorflow as tf

def load_graph(frozen_graph_filename):
    with tf.io.gfile.GFile(frozen_graph_filename, "rb") as f:
        graph_def = tf.compat.v1.GraphDef()
        graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def)
    return graph

def analyze_inputs_outputs(graph):
    ops = graph.get_operations()
    outputs_set = set(ops)
    inputs = []
    for op in ops:
        # An op with no inputs (and which is not a constant) is a graph input,
        # e.g. a Placeholder.
        if len(op.inputs) == 0 and op.type != 'Const':
            inputs.append(op)
        else:
            # Any op that feeds another op cannot be a graph output,
            # so drop it from the candidate set.
            for input_tensor in op.inputs:
                if input_tensor.op in outputs_set:
                    outputs_set.remove(input_tensor.op)
    outputs = list(outputs_set)
    return (inputs, outputs)
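
A usage sketch (the file path is a placeholder; note that tf.import_graph_def prefixes node names with "import/" by default):

graph = load_graph('/your/path/to/graphname.pb')
inputs, outputs = analyze_inputs_outputs(graph)
print('Inputs:', [op.name for op in inputs])
print('Outputs:', [op.name for op in outputs])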
tigertang