I've been trying for two years (not continuously; I keep moving to other things and coming back) to compile and run a keras.applications-based model on the Vision Kit. I have unsuccessfully tried many approaches (I've even forgotten some of them) and asked questions in forums, in the official project repo, on Stack Overflow, etc., with no luck.

After asking many different questions, I thought that posting my exact use case and asking about it directly might be more promising.

I need to compile and run a keras.applications-based model on the AIY Vision Kit. I know the device is limited, so I'm trying to use a model the documentation says is supported on the Vision Kit: MobileNetV2. I'm doing transfer learning by freezing some layers of keras.applications.MobileNetV2, removing others, and then adding custom trainable layers; for testing I'm also trying VGG16 (a rough sketch of this setup follows the list below). I've had many issues in the past, but the most recent ones are:

  1. Even when I use only the first layers of the pre-trained model and discard the rest (the exported .pb file is small, around 2.5 MB), I get (for VGG16; this happens on my computer at compilation time, not on the Raspberry Pi device): Not enough on-device memory to run model.

  2. For MobileNetV2, even though the documentation says it is supported, I get: Check failed: other_op->type == OperatorType::kTensorFlowMerge
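
For reference, here is a minimal sketch of roughly what I'm doing (TF 1.x style; the input size, class count, and file name are just illustrative, not my exact code):

    import tensorflow as tf
    from tensorflow.keras import backend as K, layers, models
    from tensorflow.keras.applications import MobileNetV2

    # Pre-trained base without the classification head, with its layers frozen
    base = MobileNetV2(input_shape=(160, 160, 3), include_top=False, weights='imagenet')
    base.trainable = False

    # Custom trainable head on top of the frozen base
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(5, activation='softmax'),  # illustrative number of classes
    ])

    # ... training of the custom head omitted ...

    # Freeze the variables into constants and export a .pb for the bonnet compiler
    sess = K.get_session()
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, [model.output.op.name])
    with tf.gfile.GFile('mobilenet_v2_custom.pb', 'wb') as f:
        f.write(frozen_graph_def.SerializeToString())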

Any suggestions for my case? Or is it simply impossible to run a keras.applications-based model on the Vision Kit? If it is, would it instead be possible to combine the tf-for-poets MobileNet .pb file with Keras output layers and compile that?
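
To make that last question concrete, this is roughly what I have in mind; the graph file and tensor names are assumptions on my part, and I don't know whether the resulting graph would even compile:

    import tensorflow as tf

    # Load the retrained tf-for-poets MobileNet graph from disk
    graph_def = tf.GraphDef()
    with tf.gfile.GFile('retrained_graph.pb', 'rb') as f:
        graph_def.ParseFromString(f.read())

    # Pull one of its intermediate (bottleneck) tensors back into the default
    # graph; the tensor name here is a placeholder, not the real node name
    bottleneck = tf.import_graph_def(
        graph_def, name='poets',
        return_elements=['some/bottleneck/tensor:0'])[0]

    # Attach Keras output layers on top of the imported tensor
    predictions = tf.keras.layers.Dense(5, activation='softmax')(bottleneck)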

I would really appreciate some help with this, or at least a definitive "no, it's not possible" so I don't keep pursuing something that just can't be done.

Luis Leal
