
I'm loading a V2 checkpoint with the TensorFlow 1.4 C++ API, which is fairly straightforward following this answer: https://stackoverflow.com/a/43639305/9015277 . However, that answer does not specify how the inputs can be fed to the loaded network.

In TF 1.4 inputs for ClientSession::Run() can be specified using a FeedType object, which is defined as:

std::unordered_map< Output, Input::Initializer, OutputHash > FeedType

Here each Output key represents a tensor value produced by an op. With a graph built in the C++ API I guess it is fairly simple to just pass the input placeholder, but how can I do the same with a graph loaded from a V2 checkpoint?
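To illustrate the simple case I mean, here is a minimal sketch where a Placeholder built in C++ is itself the Output used as the FeedType key (a dummy graph, just for illustration):

#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/standard_ops.h"

int main() {
  using namespace tensorflow;
  using namespace tensorflow::ops;

  // Build a dummy graph: the Placeholder constructor returns an object
  // convertible to Output, which can be used directly as a FeedType key.
  Scope root = Scope::NewRootScope();
  auto x = Placeholder(root, DT_FLOAT);
  auto y = Multiply(root, x, {2.0f});

  ClientSession session(root);
  std::vector<Tensor> outputs;

  // Feed the placeholder, fetch y.
  ClientSession::FeedType feeds = {{x, Input::Initializer(3.0f)}};
  TF_CHECK_OK(session.Run(feeds, {y}, &outputs));
  // outputs[0] now holds 6.0f
  return 0;
}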

In this example (which uses the r0.12 API, I believe) https://github.com/tensorflow/tensorflow/blob/ab0fcaceda001825654424bf18e8a8e0f8d39df2/tensorflow/examples/label_image/main.cc#L346 it is straightforward again: the layers are just referred to by their names. But how can I do the same with the new API?

mkisantal

2 Answers


Well, I was not getting any useful answers, so in the end I just used the old C++ API instead (which, by the way, still works in r1.4). I'm still looking for an answer on how this should be done with the new API.
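For reference, the checkpoint loading itself follows the pattern from the answer linked in the question; a sketch (the .meta path and checkpoint prefix below are placeholders for your own):

#include "tensorflow/core/public/session.h"
#include "tensorflow/core/protobuf/meta_graph.pb.h"

using namespace tensorflow;

// Create a session and load the graph from the MetaGraphDef.
Session* session;
TF_CHECK_OK(NewSession(SessionOptions(), &session));

MetaGraphDef meta_graph;
TF_CHECK_OK(ReadBinaryProto(Env::Default(), "my_model.meta", &meta_graph));
TF_CHECK_OK(session->Create(meta_graph.graph_def()));

// Restore the weights: feed the checkpoint prefix to the saver's
// filename tensor and run its restore op.
Tensor checkpoint_path(DT_STRING, TensorShape());
checkpoint_path.scalar<std::string>()() = "my_model";
TF_CHECK_OK(session->Run(
    {{meta_graph.saver_def().filename_tensor_name(), checkpoint_path}},
    {}, {meta_graph.saver_def().restore_op_name()}, nullptr));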

In the old TF API Session::Run is as follows:

virtual Status Run(
  const std::vector< std::pair< string, Tensor > > & inputs,
  const std::vector< string > & output_tensor_names,
  const std::vector< string > & target_node_names,
  std::vector< Tensor > *outputs
)=0

The string in each pair of the inputs vector lets you specify an input to the network by its name from the Python graph definition, similar to how feed_dict is used in Python. Here is the graph definition of my input placeholder in Python:

with tf.variable_scope('global'):
    velocity_state = tf.placeholder(shape=[None, 1],
                                    dtype=tf.float32,
                                    name='velocity_state')

Feeding this particular placeholder in C++ with some dummy data, and running inference:

using namespace tensorflow;

// specifying input node name and creating tensor to feed it
string velocity_state_placeholder = "global/velocity_state";
Tensor velocity_state = Input::Initializer((float)0.0, TensorShape({1, 1})).tensor;

// collecting all inputs
std::vector<std::pair<string, Tensor>> input_feed;
input_feed.push_back(std::make_pair(velocity_state_placeholder, velocity_state));

// output node name
string action_distribution = "global/fully_connected_1/Softmax";

// tensor for results
std::vector<Tensor> output_tensors;

// running inference
Status run_status = session->Run(input_feed,
                                 {action_distribution},
                                 {},
                                 &output_tensors);
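The action probabilities can then be read back from the first fetched tensor, for example (a sketch; assumes the run succeeded and the output has shape [1, num_actions]):

#include <iostream>

// Read the softmax output back from the fetched tensor.
if (run_status.ok()) {
  auto probabilities = output_tensors[0].flat<float>();
  for (int i = 0; i < probabilities.size(); ++i) {
    std::cout << "action " << i << ": " << probabilities(i) << "\n";
  }
}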
mkisantal

According to the TensorFlow 1.4 API documentation, tensorflow::ClientSession::Run has the following signature:

Status Run (
  const FeedType & inputs,
  const std::vector< Output > & fetch_outputs,
  const std::vector< Operation > & run_outputs,
  std::vector< Tensor > *outputs
) const;

FeedType being a typedef for:

std::unordered_map< Output, Input::Initializer, OutputHash > FeedType
YSC
    Thank you for your answer @YSC, but it does not really address my question. My problem starts when I am trying to initialize the FeedType object for Run. The keys are tensorflow::Output objects, and I just don't know how to make these keys correspond to my input placeholders in the loaded model. – mkisantal Nov 27 '17 at 16:19