
I have a requirement where I need to deploy a convolutional neural network model on an offline device. I know we can use Google Cloud ML to train a model, tune the hyperparameters, and deploy it for prediction.

But my question is whether we can download the trained TensorFlow model and deploy it on a custom device for prediction.

Note: the custom device will have a lot of processing power but no internet connectivity.

1 Answer


Yes. The training service and the prediction service are completely separate. To train a model for a custom device, you create a TensorFlow script to train the model. The script will typically describe one TensorFlow graph for training and a second for prediction (the prediction graph is constructed so that it can load the parameters learned during training). The prediction graph can be tailored for your custom hardware.
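As a minimal sketch of that flow (TF 1.x style, with a made-up 28x28/10-class shape, random toy data, and a hypothetical bucket path; here the training and prediction graphs share the same placeholder input to keep it short):

```python
import numpy as np
import tensorflow as tf

# Hypothetical export location; any writable path works, including gs://
# when TensorFlow is built with GCS support. The directory must not exist yet.
EXPORT_DIR = "gs://your-bucket/cnn/export"

def cnn_logits(images):
    """Model function shared by the training and prediction graphs."""
    net = tf.layers.conv2d(images, 32, 3, activation=tf.nn.relu)
    net = tf.layers.max_pooling2d(net, 2, 2)
    net = tf.layers.flatten(net)
    net = tf.layers.dense(net, 64, activation=tf.nn.relu)
    return tf.layers.dense(net, 10)

# Training graph: placeholders stand in for a real input pipeline.
images = tf.placeholder(tf.float32, [None, 28, 28, 1], name="images")
labels = tf.placeholder(tf.int64, [None], name="labels")
logits = cnn_logits(images)
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Toy training loop on random data, just to show the flow.
    for _ in range(10):
        x = np.random.rand(32, 28, 28, 1).astype(np.float32)
        y = np.random.randint(0, 10, size=32)
        sess.run(train_op, feed_dict={images: x, labels: y})

    # Prediction graph: export a SavedModel that reuses the trained variables.
    tf.saved_model.simple_save(
        sess, EXPORT_DIR,
        inputs={"images": images},
        outputs={"classes": tf.argmax(logits, axis=1)})
```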

Be sure to include commands to export the prediction graph to GCS. For an example of how to export a model, see this post.
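Once the exported model has been copied from GCS onto the device, prediction needs no connectivity at all. A rough sketch of the offline side, assuming the SavedModel exported above was transferred to a hypothetical local path on the device (e.g. via `gsutil cp -r` on a connected machine, then USB or a local network):

```python
import numpy as np
import tensorflow as tf

MODEL_DIR = "/opt/models/cnn/export"  # hypothetical local path on the device

with tf.Session(graph=tf.Graph()) as sess:
    # Load the prediction graph together with its trained variables.
    meta_graph = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], MODEL_DIR)

    # Look up the input/output tensors from the default serving signature
    # that simple_save created.
    sig = meta_graph.signature_def[
        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
    images = sess.graph.get_tensor_by_name(sig.inputs["images"].name)
    classes = sess.graph.get_tensor_by_name(sig.outputs["classes"].name)

    # Run a prediction entirely offline.
    batch = np.random.rand(1, 28, 28, 1).astype(np.float32)
    print(sess.run(classes, feed_dict={images: batch}))
```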

rhaertel80