
Can someone give me a hint on how to load a model, trained and exported in Python with Keras, using the C++ API of TensorFlow 2.0?

I can't find any information about this; everything I find covers TensorFlow versions < 2.

Kind regards


3 Answers


OK, I found a solution, but with other problems:

In Python you have to export it with:

tf.keras.models.save_model(model, 'model')

In C++ you have to load it with:

tensorflow::SavedModelBundle model;
tensorflow::Status status = tensorflow::LoadSavedModel(
  tensorflow::SessionOptions(), 
  tensorflow::RunOptions(), 
  "path/to/model/folder", 
  {tensorflow::kSavedModelTagServe}, 
  &model);
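
If loading fails, the bundle stays empty and later calls will fail with confusing errors, so it is worth checking the returned status first. A minimal sketch (assuming <iostream> is included and this runs inside a function returning int):

// Abort early if the SavedModel could not be loaded.
if (!status.ok()) {
  std::cerr << "Failed to load model: " << status.ToString() << std::endl;
  return -1;
}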

Based on this post: Using Tensorflow checkpoint to restore model in C++

If I now try to set inputs and outputs, it throws errors: "Could not find node with name 'outputlayer'" and "Invalid argument: Tensor input:0, specified in either feed_devices or fetch_devices was not in the Graph".

Does anybody have an idea what's wrong here?


Your initial idea was good. You need to use the saved_model_cli tool that ships with TensorFlow. It will output something like this:

PS C:\model_dir> saved_model_cli show --dir . --all
       
[...]

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['__saved_model_init_op']:
  The given SavedModel SignatureDef contains the following input(s): 
  The given SavedModel SignatureDef contains the following output(s):
    outputs['__saved_model_init_op'] tensor_info:
        dtype: DT_INVALID
        shape: unknown_rank
        name: NoOp
  Method name is:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s): 
    inputs['flatten_input'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 2)
        name: serving_default_flatten_input:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['dense_2'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 1)
        name: StatefulPartitionedCall:0
  Method name is: tensorflow/serving/predict

You need to look for the names of the inputs and outputs that will be used. Here, those are:

        name: serving_default_flatten_input:0

for the input, and

        name: StatefulPartitionedCall:0

for the output.

Once you have those names, you can use them in your code:

#include "tensorflow/cc/saved_model/loader.h"
#include "tensorflow/cc/saved_model/tag_constants.h"
#include "tensorflow/core/public/session.h"
#include "tensorflow/core/public/session_options.h"
#include "tensorflow/core/framework/logging.h" 

// ...

// We need to use SavedModelBundleLite as an in-memory model object for TensorFlow's model bundle.
const auto savedModelBundle = std::make_unique<tensorflow::SavedModelBundleLite>();

// Create dummy options.
tensorflow::SessionOptions sessionOptions;
tensorflow::RunOptions runOptions;

// Load the model bundle.
const auto loadResult = tensorflow::LoadSavedModel(
        sessionOptions,
        runOptions,
        modelPath, // std::string containing the path of the model bundle
        { tensorflow::kSavedModelTagServe },
        savedModelBundle.get());

// Check if loading was okay.
TF_CHECK_OK(loadResult);

// Provide input data.
tensorflow::Tensor tensor(tensorflow::DT_FLOAT, tensorflow::TensorShape({ 2 }));
tensor.vec<float>()(0) = 20.f;
tensor.vec<float>()(1) = 6000.f;

// Link the data with the tensor names so TensorFlow knows where to put those data entries.
std::vector<std::pair<std::string, tensorflow::Tensor>> feedInputs = { {"serving_default_flatten_input:0", tensor} };
std::vector<std::string> fetches = { "StatefulPartitionedCall:0" };

// We need to store the results somewhere.
std::vector<tensorflow::Tensor> outputs;

// Let's run the model...
auto status = savedModelBundle->GetSession()->Run(feedInputs, fetches, {}, &outputs);
TF_CHECK_OK(status);

// ... and print out its predictions.
for (const auto& record : outputs) {
    LOG(INFO) << record.DebugString();
}

Running this will result in:

Directory ./model_bundle does contain a model.
2022-08-03 10:50:43.367619: I tensorflow/cc/saved_model/reader.cc:43] Reading SavedModel from: ./model_bundle 
2022-08-03 10:50:43.370764: I tensorflow/cc/saved_model/reader.cc:81] Reading meta graph with tags { serve }
2022-08-03 10:50:43.370862: I tensorflow/cc/saved_model/reader.cc:122] Reading SavedModel debug info (if present) from: ./model_bundle 
2022-08-03 10:50:43.371034: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-08-03 10:50:43.390553: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:354] MLIR V1 optimization pass is not enabled
2022-08-03 10:50:43.391459: I tensorflow/cc/saved_model/loader.cc:228] Restoring SavedModel bundle.
2022-08-03 10:50:43.426841: I tensorflow/cc/saved_model/loader.cc:212] Running initialization op on SavedModel bundle at path: ./model_bundle 
2022-08-03 10:50:43.433764: I tensorflow/cc/saved_model/loader.cc:301] SavedModel load for tags { serve }; Status: success: OK. Took 66144 microseconds.
2022-08-03 10:50:43.450891: I TensorflowPoC.cpp:46] Tensor<type: float shape: [1,1] values: [-1667.12402]>

TensorflowPoC.exe (process 21228) exited with code 0.
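
As a side note, you don't have to hard-code the tensor names from the CLI output: the loaded bundle exposes the same signature map programmatically. A minimal sketch, assuming the savedModelBundle from above and that <iostream> is included:

// Walk every SignatureDef in the bundle and print the graph tensor name
// behind each logical input/output (e.g. "serving_default_flatten_input:0").
for (const auto& signature : savedModelBundle->GetSignatures()) {
    std::cout << "signature: " << signature.first << std::endl;
    for (const auto& input : signature.second.inputs())
        std::cout << "  input:  " << input.first << " -> " << input.second.name() << std::endl;
    for (const auto& output : signature.second.outputs())
        std::cout << "  output: " << output.first << " -> " << output.second.name() << std::endl;
}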

You have to check the input and output names. Use TensorBoard to view the model structure (it's in the Graph tab), or use a network viewer like Netron.
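
If a GUI viewer isn't an option, you can also dump every node name from the loaded graph in C++ and spot the input placeholder and output layer in the list. A minimal, self-contained sketch (the model path is a placeholder):

#include "tensorflow/cc/saved_model/loader.h"
#include "tensorflow/cc/saved_model/tag_constants.h"
#include "tensorflow/core/public/session_options.h"

#include <iostream>

int main() {
    // Load the full SavedModelBundle so the MetaGraphDef is accessible.
    tensorflow::SavedModelBundle bundle;
    const auto status = tensorflow::LoadSavedModel(
        tensorflow::SessionOptions(), tensorflow::RunOptions(),
        "path/to/model/folder", {tensorflow::kSavedModelTagServe}, &bundle);
    if (!status.ok()) {
        std::cerr << status.ToString() << std::endl;
        return 1;
    }

    // Every node in the graph, including the input and output layers,
    // shows up here by its graph name.
    for (const auto& node : bundle.meta_graph_def.graph_def().node()) {
        std::cout << node.name() << std::endl;
    }
    return 0;
}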