I am working with TensorFlow 1.4.
I created a custom tf.estimator.Estimator in order to do classification, like this:
def model_fn(features, labels, mode, params):
    # Some operations here
    [...]
    return tf.estimator.EstimatorSpec(mode=mode,
                                      predictions={"Preds": predictions},
                                      loss=cost,
                                      train_op=train_op,
                                      eval_metric_ops=eval_metric_ops,
                                      training_hooks=[summary_hook])
my_estimator = tf.estimator.Estimator(model_fn=model_fn,
                                      params=model_params,
                                      model_dir='/my/directory')
I can train it easily:
input_fn = create_train_input_fn(path=train_files)
my_estimator.train(input_fn=input_fn)
where input_fn is a function that reads data from TFRecord files using the tf.data.Dataset API.
Since I am reading from TFRecord files, I don't have the labels in memory when I am making predictions.
My question is: how can I get both the predictions AND the labels back, from either the predict() method or the evaluate() method?
There seems to be no way to get both: predict() does not appear to have access to the labels, and evaluate() does not expose the predictions dictionary.
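One workaround I have seen suggested is to route the label through the features dict and echo it back as an extra entry in predictions, so that predict() yields both. This is only a hedged sketch under assumed feature names ("x", "label") and an assumed n_classes param; it also requires the prediction input_fn to keep the label inside features rather than returning it separately:

```python
import tensorflow as tf


def model_fn(features, labels, mode, params):
    # Minimal classifier sketch (TF 1.x API); the feature keys and
    # params["n_classes"] are assumptions for illustration.
    logits = tf.layers.dense(features["x"], params["n_classes"])
    predictions = {
        "Preds": tf.argmax(logits, axis=1),
        # Echo the label back as a "prediction". This works only when
        # the predict input_fn leaves the label inside `features`.
        "labels": features["label"],
    }
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)

    cost = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    train_op = tf.train.AdamOptimizer().minimize(
        cost, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode=mode, loss=cost, train_op=train_op)
```

With this, each dict yielded by my_estimator.predict(input_fn=...) would carry both "Preds" and "labels", but I am not sure whether this is the intended way to do it.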