
I have successfully (I hope) trained and evaluated a model using tf.Estimator, reaching a train/eval accuracy of around 83-85%. Now I would like to test the model on a separate dataset using the predict() call of the Estimator class, preferably from a separate script.

I've looked at this, which says that I need to export the model as a SavedModel, but is this really necessary? Looking at the documentation for the Estimator class, it seems like I can just pass the path to my checkpoint and graph files via the model_dir parameter. Does anyone have any experience with this? When I run my model on the same dataset I used for validation, I do not get the same performance as during the validation phase... :-(

– Neergaard

1 Answer


I think you just need a separate file containing your model_fn definition. Then you instantiate the same Estimator class in another script, using the same model_fn definition and the same model_dir.

That works because the Estimator API recovers the tf.Graph definition and the latest model.ckpt files from model_dir by itself, so you are able to continue training, evaluation, and prediction.
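
Something like this, as a minimal sketch: the toy model_fn, the paths, and the numpy_input_fn with random data are only placeholders to keep the example self-contained; substitute your own network and your test-set input_fn.

    # model.py -- shared file with the model_fn definition
    # (a toy placeholder model_fn; substitute your own network)
    import tensorflow as tf

    def model_fn(features, labels, mode, params):
        logits = tf.layers.dense(features['x'], units=2)
        predictions = {'classes': tf.argmax(logits, axis=1),
                       'probabilities': tf.nn.softmax(logits)}
        if mode == tf.estimator.ModeKeys.PREDICT:
            return tf.estimator.EstimatorSpec(mode, predictions=predictions)
        loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
        train_op = tf.train.AdamOptimizer().minimize(
            loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

    # predict.py -- separate script that reuses the same model_fn and model_dir
    import numpy as np
    import tensorflow as tf
    from model import model_fn

    estimator = tf.estimator.Estimator(
        model_fn=model_fn,
        model_dir='/path/to/model_dir')  # same directory written during training

    # the input_fn only needs to yield features in the same format as during eval
    predict_input_fn = tf.estimator.inputs.numpy_input_fn(
        x={'x': np.random.rand(8, 10).astype(np.float32)},
        shuffle=False)

    for prediction in estimator.predict(input_fn=predict_input_fn):
        print(prediction['classes'])

Because predict() restores the latest checkpoint from model_dir, no SavedModel export is required for this kind of offline testing; exporting a SavedModel only becomes necessary when you want to serve the model without the original model_fn code.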

– J.E.K
  • I found out that the reason my predict() function didn't work as expected was that, when I wrote the script for gathering data in TFRecord format, I hadn't noticed that os.listdir lists directory entries in arbitrary order, so the predicted labels didn't match up with the ground truth at all. Your comment is correct. – Neergaard Jan 20 '18 at 00:48
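
A minimal sketch of the fix mentioned in that comment, assuming the TFRecord file is written by iterating over a directory of input files (the directory path is a placeholder):

    import os

    data_dir = '/path/to/test/data'  # placeholder path
    # sorted() makes the file order deterministic, so the examples written to
    # the TFRecord file line up with the ground-truth labels collected elsewhere
    filenames = sorted(os.listdir(data_dir))
    for fname in filenames:
        path = os.path.join(data_dir, fname)
        # parse `path` and write the corresponding tf.train.Example here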