
**I'm aware of similar questions!**

My question is about my particular situation: I used Google Vision to train my own model to detect custom objects. I've come across similar shape errors in the past and resolved them by reshaping my input image.

This particular error says that my shape must be an empty array or empty shape. Is that even possible? If this is not a glitch, how do I resolve it?

Below is how I resolved previous shape complaints in other projects. That solution does not work for the empty array/shape error:

    // Load the model with the AutoML loader (graph-model loader left for reference).
    const model = await autoML.loadObjectDetection('./model/model.json');
    // const model = await tfjs.loadGraphModel('./model/model.json');
    await tfjs.ready();
    // Grab a frame, add a batch dimension, resize to 224x224, and cast to float32.
    const tfImg = tfjs.browser.fromPixels(videoElement.current).expandDims(0);
    const smallImg = tfjs.image.resizeBilinear(tfImg, [224, 224]);
    const resized = tfjs.cast(smallImg, 'float32');
    const t4d = tfjs.tensor4d(Array.from(resized.dataSync()), [1, 224, 224, 3]);
    // Run detection on the reshaped tensor rather than the raw frame.
    const predictions = await model.detect(t4d, options);
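Not part of the original post, but a small sanity check one could run before calling `detect()`, so a malformed input surfaces as a readable message instead of the cryptic "must be []" error. The helper name and the expected `[1, H, W, 3]` shape are assumptions for illustration; this is plain JavaScript with no tfjs dependency:

```javascript
// Hypothetical helper (not part of tfjs): validate a shape array such as
// tensor.shape and throw a descriptive error when it is not a positive
// NHWC batch-of-one RGB shape like [1, 224, 224, 3].
function checkImageShape(shape) {
  const ok =
    Array.isArray(shape) &&
    shape.length === 4 &&
    shape[0] === 1 &&
    shape[3] === 3 &&
    shape.every((d) => Number.isInteger(d) && d > 0);
  if (!ok) throw new Error(`unexpected input shape: [${shape}]`);
  return shape;
}

console.log(checkImageShape([1, 224, 224, 3])); // passes: valid batch of one RGB image
```

For example, `checkImageShape(t4d.shape)` just before the `detect()` call would immediately flag a model whose input resolved to an empty or `-1`-filled shape.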
TonyCruze
  • Could you include the input shape of the model from the model.json? – yudhiesh Sep 16 '20 at 05:03
  • @yuRa Here's model.json https://pastebin.com/qNru5i7y – TonyCruze Sep 16 '20 at 17:46
  • Since I created this post, I've trained my model inside the Vision dashboard. This second model works correctly but needs more training. Then I trained a third model and it produces the same error as the first: "must be []". It seems the dashboard is producing bad models, or they are mangled during the conversion/export process. – TonyCruze Sep 16 '20 at 17:49
  • 1
    There must be some issue when converting the model cause even in the model.json the input shape is "tensorShape": {"dim": [{"size": "-1"}, {"size": "-1"}, {"size": "-1"}, {"size": "3"}]}}} which does not make sense – yudhiesh Sep 17 '20 at 03:11
  • 1
    Thanks for clarifying that. Being new to this I wasn't sure my assumption was correct. Also, I notice symbols or the letter 'd' next to all of my labels in dict.txt. It seems as if something was processing when the 3 hrs of training was up and google vision decided to end the training for my model. TWICE – TonyCruze Sep 17 '20 at 03:24
  • 1
    I would suggest opening this issue on the github page of tensorflow.js or google vision. – yudhiesh Sep 17 '20 at 04:22
  • I have the same problem. @TonyCruze Have you got an answer for this question? – D T Sep 18 '20 at 06:55
  • 2
    @DT nothing yet...I'll reach out to the automl vision team at google and see if they have a solution. – TonyCruze Sep 18 '20 at 07:10
  • 1
    I'm having the same issue. My suspicion is that the Vision team pushed an update with a bug in it. From what I can debug, the issue is with the method by which you load the model.json. I used a model trained step by step from this [article](https://cloud.google.com/vision/automl/object-detection/docs/tensorflow-js-tutorial). Same error code as you. When loading the model from the mentioned article's [given URL](https://storage.googleapis.com/tfjs-testing/tfjs-automl/object_detection/model.json), it works. When loading from relative (or absolute) path, I get the `must be []` error. Please let m – Spenco100 Sep 19 '20 at 09:06
  • I also have the same problem. @TonyCruze any news? Thanks! – Federico G Oct 03 '20 at 02:01

1 Answer


I had the same problem with models I exported from Vision AI to the tfjs format and loaded following this article.

Workaround:

As a workaround, I exported the model from Vision AI in the SavedModel format and converted it to a tfjs model with `tensorflowjs_converter`, following this guide. The result loads as expected and works fine.
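A rough sketch of that conversion step, assuming the SavedModel was exported to `./saved_model` and the tfjs output should land in `./web_model` (both paths are placeholders; the flags match the command discussed in the comments, and `--skip_op_check` may be needed if unsupported ops are reported):

```shell
# Install the converter (it ships with the tensorflowjs pip package).
pip install tensorflowjs

# Convert the exported SavedModel into a tfjs graph model.
tensorflowjs_converter \
  --input_format=tf_saved_model \
  --output_format=tfjs_graph_model \
  --signature_name=serving_default \
  --saved_model_tags=serve \
  ./saved_model \
  ./web_model
```

The resulting `./web_model/model.json` is what you then point the loader at.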

Vale
  • I'll have to give this a try. Out of pure curiosity, which model loader did you use in the end? The AutoML model loader or one of the two tfjs model loaders? One is called the graph loader and the other is named something that escapes me at the moment. – TonyCruze Oct 04 '20 at 07:56
  • 1
    I used `this.model = await automl.loadObjectDetection( "/models/mymodel/model.json" ); const predictions = await this.model.detect(this.imageObject); ` – Vale Oct 04 '20 at 08:31
  • Can you share the exact commands you ran to convert the SavedModel to Tensorflowjs model that is compatible with automl.loadObjectDetection? – user896993 Oct 13 '20 at 13:05
  • When running `tensorflowjs_converter \ --input_format=tf_saved_model \ --output_format=tfjs_graph_model \ --signature_name=serving_default \ --saved_model_tags=serve \ /tmp \ /tmp` I'm getting this error `ValueError: Unsupported Ops in the model before optimization LookupTableFindV2, DecodeJpeg, HashTableV2` – user896993 Oct 13 '20 at 14:03
  • @user896993 You can set the flag: --skip_op_check=SKIP_OP_CHECK (or accept the option in the wizard) – Federico G Nov 12 '20 at 02:24