
AutoML seems great. One big question: can we export the trained model for offline inference, for example with TensorFlow or TensorFlow Lite?

N8allan
user4572254

4 Answers


This is not supported as of March 2019. If you are interested in this feature, star this request: https://issuetracker.google.com/issues/113122585

Also check that link in case Google has implemented the feature since this answer.

Update: initial support has been added for classification, but not yet detection. See Peter Gibson's answer.

N8allan

EDIT: It's now possible to export both Image Classification and Object Detection models. See https://cloud.google.com/vertex-ai/docs/export/export-edge-model#object-detection

Original Answer Follows

Current status (August 2019) for AutoML Vision is that you can export AutoML image classification models but not object detection. This feature is in beta (as is AutoML Vision itself). I couldn't find details for other AutoML products and haven't tried them myself, so I'm unsure of their status.

From https://cloud.google.com/vision/automl/docs/

AutoML Vision Edge now allows you to export your custom trained models.

  • AutoML Vision Edge allows you to train and deploy low-latency, high accuracy models optimized for edge devices.
  • With Tensorflow Lite, Core ML, and container export formats, AutoML Vision Edge supports a variety of devices.
  • Hardware architectures supported: Edge TPUs, ARM and NVIDIA.
  • To build an application on iOS or Android devices you can use AutoML Vision Edge in ML Kit. This solution is available via Firebase and offers an end-to-end development flow for creating and deploying custom models to mobile devices using ML Kit client libraries.

Documentation https://cloud.google.com/vision/automl/docs/edge-quickstart

I trained a classification model, exported the TFLite model (it exports to Cloud Storage), and was able to download the model files and load them into TensorFlow using the Python API without too much hassle. Here's the relevant code for loading the model and running inference:

Based on https://www.tensorflow.org/lite/guide/inference#load_and_run_a_model_in_python

import tensorflow as tf

MODEL_PATH = 'model.tflite'  # path to the downloaded AutoML export

# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()

# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def predict(frame):
    # `frame` must match the input tensor's shape and dtype
    # (check input_details[0]['shape'] and input_details[0]['dtype']).
    interpreter.set_tensor(input_details[0]['index'], frame)
    interpreter.invoke()

    # The function `get_tensor()` returns a copy of the tensor data.
    # Use `tensor()` in order to get a pointer to the tensor.
    output_data = interpreter.get_tensor(output_details[0]['index'])
    return output_data
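The output is one score per class; the export bundle typically also includes a dict.txt listing the label names in order. A minimal sketch of turning that raw output into top-k predictions (the labels and scores below are hypothetical, and the scores are assumed to already be floats):

```python
def top_k(scores, labels, k=3):
    """Pair each score with its label and return the k highest, best first.

    `scores` is a flat list of floats (one per class); `labels` holds the
    matching class names, e.g. read from the exported dict.txt.
    """
    ranked = sorted(zip(labels, scores), key=lambda p: p[1], reverse=True)
    return ranked[:k]

# Hypothetical label set and model output for illustration.
labels = ['daisy', 'rose', 'tulip']
scores = [0.05, 0.80, 0.15]
print(top_k(scores, labels, k=2))  # [('rose', 0.8), ('tulip', 0.15)]
```

In practice you would pass `predict(frame)[0]` (the first batch element) as `scores`, dequantizing first if the output tensor is uint8.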
Peter Gibson

This should be it:

https://cloud.google.com/vision/automl/docs/deploy

Note that the export options (at least currently) do not appear on models you have already trained. You have to select a model, train it, and only then do you get the option to either leave the model in the cloud or generate an edge version.

You can export an image classification model in either generic Tensorflow Lite format, Edge TPU compiled TensorFlow Lite format, or TensorFlow format to a Google Cloud Storage location using the ExportModel API.
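For reference, the ExportModel call takes an output config naming the export format and a Cloud Storage destination. A rough sketch of building that request body (field names follow the AutoML Vision Edge export docs as I understand them; the bucket path and format value are placeholders, so check the current API reference before relying on this):

```python
import json

# Hypothetical destination bucket; documented format values include
# "tflite", "edgetpu_tflite", and "tf_saved_model".
body = {
    "outputConfig": {
        "modelFormat": "tflite",
        "gcsDestination": {"outputUriPrefix": "gs://YOUR_BUCKET/export/"},
    }
}

print(json.dumps(body, indent=2))
```

The exported files then land under the given `outputUriPrefix`, from where they can be downloaded (e.g. with `gsutil cp`).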

Meelpeer
  • Welcome to Stack Overflow! While this link may answer the question, it is better to include the essential parts of the answer here and provide the link for reference. Link-only answers can become invalid if the linked page changes. – Johan May 23 '19 at 11:28

It is not yet possible to export models from AutoML. @Infinite Loops, AutoML and ML Engine are different products.

Awais