
I am developing an app in the Ionic 3 framework for recognizing drawn characters, and I am having trouble importing the model. I have imported a Keras model (converted with tensorflowjs_converter) into my Ionic 3 app in two different ways:

  1. The model.json and weight files (shards) are placed into the folder /assets/models.
  2. The model.json and weight files (shards) are being hosted in firebase storage.

When launching the app in the browser with the first method, the model and weights are correctly loaded and I am able to predict the classes. But when launching the app with the same method on my Android device with ionic cordova run android --device, the model fails to retrieve the data from the weight files and gives the following error:

Based on the provided shape, [3, 3, 32, 64], the tensor should have 18432 values but has 917.

Now, I tried hosting the files in Firebase Storage to fix this issue. I retrieve the model.json from storage and still get the same error as above, in both the browser and the device.

From the experience of storing the shards and model locally in the app, I conclude that the shards are not being recognized on the device with either method.

Also, when using the Firebase Storage method on the device and trying to fetch the model from the URL, I catch the following error: Failed to fetch.

Here is the code of retrieving the shards and the model:

const modelURL: string = await this.db.getModel();
const shards: string[] = await this.db.getShards();

modelURL and shards contain the download URLs from Firebase Storage. The model and the shards are kept together on the same level:

/* Firebase Storage hierarchy */

https://firebasestorage.googleapis.com/v0/b/project-foo.com/o/model%2Fmodel.json?alt=media&token=******
https://firebasestorage.googleapis.com/v0/b/project-foo.com/o/model%2Fgroup1-shard1of4?alt=media&token=******
https://firebasestorage.googleapis.com/v0/b/project-foo.com/o/model%2Fgroup1-shard2of4?alt=media&token=******
https://firebasestorage.googleapis.com/v0/b/project-foo.com/o/model%2Fgroup1-shard3of4?alt=media&token=******
https://firebasestorage.googleapis.com/v0/b/project-foo.com/o/model%2Fgroup1-shard4of4?alt=media&token=******

So with that, I pass the download URL of the model into tf.loadModel:

import * as tf from '@tensorflow/tfjs';

const model = await tf.loadModel(modelURL);
const output: any = model.predict(img);

So, is there any way to pass the shards into tf.loadModel(), fetched from firebase storage, so that in my device and browser I can retrieve all of the data required in order to predict from the model?

Thank you for your help.

Gabriel Garcia
  • I just came across a GitHub issue on this [ https://github.com/tensorflow/tfjs/issues/272 ]. It seems that saving the model locally works in the browser, **but on device it is not yet supported**. Meanwhile, issue [ https://github.com/tensorflow/tfjs/issues/410 ] states that loadModel does not work on Node.js (native devices) because fetch is missing in Node when trying to get the model from a URL. There is a workaround, but I do not understand it. Hope this helps. – Gabriel Garcia Jan 13 '19 at 00:40

1 Answer

The HTTP loader used by the tf.loadModel() call assumes that model.json and the corresponding weight files (group1-shard1of1, ...) share the same URL path prefix. For example, given a model file located at https://foo.bar/path/model.json, the loader will try to retrieve the weight files at https://foo.bar/path/group1-shard1of1, and so on.

In your case:

const modelURL: string = await this.db.getModel();
const shards: string[] = await this.db.getShards();

If the modelURL and shards do not share the same path prefix, you might need to create your own BrowserHttp IOHandler for loading:

const model = await tf.loadModel(new MyOwnHttpIOLoader(modelUrl, shards));

If they do, you might be able to align them by editing the model.json file manually. In the model.json file, there is an array of weight file paths.

With Firebase Storage, the problem is that the URL of the model file is https://firebasestorage.googleapis.com/v0/b/project-foo.com/o/model%2Fmodel.json, which has the path prefix firebasestorage.googleapis.com/v0/b/project-foo.com/o. The loader will use that prefix and try to load the weight file at firebasestorage.googleapis.com/v0/b/project-foo.com/o/group1-shard1of4. But this does not match your weight URL firebasestorage.googleapis.com/v0/b/project-foo.com/o/model%2Fgroup1-shard1of4; the derived URL is missing the model%2F prefix.

To make the loader work, you can manually update model.json to add the prefix. Search the file for "weightsManifest" and edit the "paths" array to read something like ["model%2Fgroup1-shard1of4", ...].

Ping Yu
  • Thank you for answering! I updated the question with your request. I've written the paths from my storage from where I retrieve the model and shards. As you can see, they are on the same level. – Gabriel Garcia Jan 16 '19 at 14:58
  • You need to convert the gs:// URL to an http URL; this SO question https://stackoverflow.com/questions/41763973/firebase-storage-gs-url-to-http might be useful. Basically, the current loadModel API looks at the protocol of the model file link to determine how to load the file. gs:// is not currently supported. By converting it to https://, it should load without creating your own IOHandler. – Ping Yu Jan 16 '19 at 17:16
  • It would be awesome if you could add a new IOHandler for loading and saving gs:// files and contribute it to TensorFlow.js. – Ping Yu Jan 16 '19 at 17:21
  • Actually I am retrieving the url with the built-in function from firebase storage `.getDownloadURL()` which gets you the download url with `https://` format already. I will update my question, sorry for the inconvenience. – Gabriel Garcia Jan 16 '19 at 18:14
  • So, as you said, it should be retrieving the model as expected on the device, but for some reason the `failed to fetch` error pops up. – Gabriel Garcia Jan 16 '19 at 18:24