Is there any example of retraining a SavedModel? In many places it is claimed to be possible, as an alternative to using checkpoints, but no examples are provided. When I have tried to do it myself, the variables of the model remain fixed:
import tensorflow as tf
from tensorflow import saved_model
from tensorflow.python.saved_model import tag_constants
from tensorflow.python.saved_model.signature_def_utils import predict_signature_def
...
model_save_path = "test.pb"
with tf.Session(graph=tf.Graph()) as net:
    ...
    for e in range(epochs):
        # Train the model
        ...
        # Export the current state of the graph and its variables
        builder = saved_model.builder.SavedModelBuilder(model_save_path)
        signature = predict_signature_def(inputs={'myInput': X, 'errorInput': Y},
                                          outputs={'myOutput': out, 'errorOutput': mse})
        builder.add_meta_graph_and_variables(sess=net,
                                             tags=[tag_constants.TRAINING],
                                             signature_def_map={'predict': signature})
        builder.save()
        print(error)
The code above trains the model, stores the model at every iteration and prints the associated error. The output shows the error improving:
2773.6885
291.35968
263.40912
255.27612
When we load it again and try to train it, the error stays the same:
...
# Load the model
model_save_path = "test.pb"
with tf.Session(graph=tf.Graph()) as net:
    loaded = tf.saved_model.load(net, ["train"], model_save_path)
    graph = tf.get_default_graph()
    ...
    for e in range(epochs):
        # Train the model
        ...
        # Re-export the (supposedly updated) model
        builder = saved_model.builder.SavedModelBuilder(model_save_path)
        signature = predict_signature_def(inputs={'myInput': X, 'errorInput': Y},
                                          outputs={'myOutput': out, 'errorOutput': mse})
        builder.add_meta_graph_and_variables(sess=net,
                                             tags=[tag_constants.TRAINING],
                                             signature_def_map={'predict': signature})
        builder.save()
        print(error)
The output is always the error from the initial training:
255.27612
255.27612
255.27612
255.27612
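For reference, this is a minimal sketch of the reload-and-continue-training flow I expected to work with the graph-mode TF 1.x API. The tensor and op names ('myInput:0', 'errorInput:0', 'errorOutput:0', 'train_step') and the training data (x_batch, y_batch, epochs) are placeholders for whatever the original graph actually defines; the point is that the restored variables should keep changing when the training op is run, as long as the initializer is not run again:

import tensorflow as tf

export_dir = "test.pb"   # directory written by SavedModelBuilder above
epochs = 4               # hypothetical number of additional epochs

with tf.Session(graph=tf.Graph()) as sess:
    # Restore the exported graph and its variable values into this session.
    tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.TRAINING], export_dir)
    graph = sess.graph

    # Hypothetical names: they must match the names used in the original graph.
    X = graph.get_tensor_by_name('myInput:0')
    Y = graph.get_tensor_by_name('errorInput:0')
    mse = graph.get_tensor_by_name('errorOutput:0')
    train_op = graph.get_operation_by_name('train_step')

    # Continue training WITHOUT re-running tf.global_variables_initializer(),
    # otherwise the restored weights are overwritten with fresh random values.
    for e in range(epochs):
        _, error = sess.run([train_op, mse], feed_dict={X: x_batch, Y: y_batch})
        print(error)

Is something like this the intended way to resume training from a SavedModel, and if so, what is missing in my code above?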