
I built my object recognition model by following this example: https://www.tensorflow.org/tutorials/images/classification
I created the model in Colab, so I now have it as both a .py and an .ipynb file.


With these instructions I compile the model and save it as .h5:
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['binary_accuracy'])
model.save('./modelname.h5')


Now I convert the model to another format: a model.json plus group1-shardXofY.bin weight files, using this code:
!pip install tensorflowjs
!mkdir model
!tensorflowjs_converter --input_format keras modelname.h5 model/
!zip -r modelname.zip model


My goal now is to load this model into my web app, in JavaScript, and use it to recognize images.
The problem is loading the model.
Any solution?

UPDATE
I'm using a screenshot of my page view as the image to recognize.
This is the relevant part of my code:

async function LoadModel() {
    Model = await tf.loadLayersModel('http://localhost/..../model.json'); // load my model
    console.log('model load confirmation ' + Model);
    try {
        maxPredictions = Model.getTotalClasses();
        console.log("during");
    }
    catch (e) {}
    if (Model) {
        // check that the model loaded
        console.log(Model);
    }
    console.log("after, model is " + Model);
}

Then

OriginImage.onload = function (event) {
    try {
        document.createEvent("TouchEvent");
        var width = document.body.clientWidth;
    }
    catch (e) {
        var width = ResizeImageWidth;
    }
    if (OriginImage.height < OriginImage.width) {
        var height = width * OriginImage.height / OriginImage.width;
    }
    else {
        var height = width;
        width = height * OriginImage.width / OriginImage.height;
    }
    ResizeImage.width = width;
    ResizeImage.height = height;
    ResizeImage.src = OriginImage.src;
}

This is the resize handler:

ResizeImage.onload = function (event) {
if (Model) recognizeImage(ResizeImage);
}

And this is recognizeImage:

async function recognizeImage(Image) {
    var cont;
    var data = "";
    var maxClassName = "";
    var maxProbability = "";
    const prediction = await Model.predict(Image);
    for (let i = 0; i < maxPredictions; i++) {
        if (i == 0) {
            maxClassName = prediction[i].className;
            maxProbability = prediction[i].probability;
        }
        else if (prediction[i].probability > maxProbability) {
            maxClassName = prediction[i].className;
            maxProbability = prediction[i].probability;
        }
    }
    if (maxProbability > 0.90) {
        console.log(maxProbability + ' for ' + maxClassName);
        return;
    }
    else {
        console.log(maxProbability + maxClassName + " Nothing");
    }
}
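Note that a tf.js LayersModel's predict expects a tensor, not an HTMLImageElement, so the resized image normally needs converting first. A minimal sketch, with assumptions: the 180×180 input size comes from the linked classification tutorial (adjust it to your model), and imageToTensor and expectedInputShape are hypothetical helpers, not part of the code above.

```javascript
// Hypothetical helper: the [batch, height, width, channels] shape a
// Keras-style image classifier expects for a single image.
function expectedInputShape(size) {
    return [1, size, size, 3];
}

// Browser-only sketch: turn an <img> element into such a tensor.
// Assumes tf (TensorFlow.js) is loaded globally via a <script> tag.
function imageToTensor(imgEl, size = 180) {
    return tf.tidy(() =>
        tf.browser.fromPixels(imgEl)      // [h, w, 3], int32
            .resizeBilinear([size, size]) // match the model's input size
            .toFloat()
            .div(255)                     // skip if the model already has a Rescaling layer
            .expandDims(0)                // [1, size, size, 3]
    );
}
```

With something like this, the call would be Model.predict(imageToTensor(Image)) rather than Model.predict(Image).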

2 Answers


You can load it using

tf.loadLayersModel(modelUrl)

In Node.js, the file can be accessed directly. The browser, however, has no access to the file system, so model.json first needs to be served by a server. This has been discussed in this answer.

  • I received this error `Error: Error when checking model : the Array of Tensors that you are passing to your model is not the size the the model expected. Expected to see 1 Tensor(s), but instead got 0 Tensors(s). at new e (tf.min.js:2) at Bm (tf.min.js:2) at e.predict (tf.min.js:2) at e.predict (tf.min.js:2) at recognizeImage (index.js:528) at HTMLImageElement.ResizeImage.onload (index.js:519)` – Giali Sep 07 '20 at 14:59
  • Could you please update your question with the js code that throws the error ? – edkeveked Sep 07 '20 at 15:14
  • I don't know if it's my mistake. More than anything, I'm using a screenshot of the page view as the image to recognize. I'll add the relevant parts of the code right away – Giali Sep 07 '20 at 15:48

I am using React.js to load a model (for image classification and other machine learning tasks).

TensorFlow.js does not provide an API for reading a previously trained model straight from the local file system, so in the browser you have to hand it File objects instead:

    // Wrap the model JSON and the weights in File objects.
    // The weight file's name must match the paths listed in
    // model.json's weightsManifest (group1-shard1of1.bin by default).
    const file = new File([JSON.stringify(modelJSON)], 'model.json', { type: 'application/json' });
    const files = new File([modelWeights], 'group1-shard1of1.bin');
    console.log(files)
    const model = await tf.loadLayersModel(tf.io.browserFiles([file, files]));
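An alternative to building the File objects by hand is to let the user pick the converter's output through an <input type="file" multiple> element; tf.io.browserFiles then wants model.json first, followed by the .bin shards. A sketch of that pattern, where the input element and loadPickedModel are assumptions:

```javascript
// Sort a picked FileList so model.json comes first, followed by the
// weight shards — the order tf.io.browserFiles expects.
function orderModelFiles(files) {
    const list = Array.from(files);
    const json = list.filter((f) => f.name.endsWith('.json'));
    const bins = list.filter((f) => f.name.endsWith('.bin'));
    return [...json, ...bins];
}

// Browser-only: wire up to a hypothetical <input type="file" multiple>.
// Assumes tf (TensorFlow.js) is already loaded on the page.
async function loadPickedModel(inputEl) {
    const files = orderModelFiles(inputEl.files);
    return tf.loadLayersModel(tf.io.browserFiles(files));
}
```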


If you are building a web app, you can create an API in Express.js to serve your model (model.json and the weights .bin files). (For a TensorFlow frozen-graph model you could instead use OpenCV's cv.dnn.readNetFromTensorflow(model.pb, weights.pbtxt).)

References: How to load tensorflow-js weights from express using tf.loadLayersModel()?

     const classifierModel = await tf.loadLayersModel(
         "https://rp5u7.sse.codesandbox.io/api/pokeml/classify"
     );
     const im = new Image();
     im.src = imagenSample; // '../../../../../Models/ShapesClassification/Samples/images (2).png';
     const abc = this.preprocessImage(im);
     const preds = await classifierModel.predict(abc); // .argMax(-1);
     console.log('<Response>', preds, 'Principal', preds.shape[0], 'DATA', preds.dataSync());
     const responde = [...preds.dataSync()];
     console.log('Max value', Math.max.apply(Math, responde.map(function (o) { return o; })));
     let indiceMax = this.indexOfMax(responde);
     console.log(indiceMax);
     console.log('<<<LABEL>>>', this.labelsReturn(indiceMax));
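The indexOfMax helper called above is not shown in the answer; a minimal version, assuming it should return the position of the highest score in the predictions array, might be:

```javascript
// Return the index of the largest value in an array of scores.
function indexOfMax(arr) {
    let best = 0;
    for (let i = 1; i < arr.length; i++) {
        if (arr[i] > arr[best]) best = i;
    }
    return best;
}
```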
  • Is this an answer to the question above? I can't see the connection, and your image seems to indicate you are asking something – Ruli Nov 26 '20 at 09:28
  • Hi, after many sprints I coded an Express.js server (reference: https://stackoverflow.com/questions/62528719/how-to-load-tensorflow-js-weights-from-express-using-tf-loadlayersmodel). Now my React.js web app uses axios to make a POST request to my TensorFlow.js model hosted in Express. – Fernando Sanchez Villanueva Dec 04 '20 at 23:37