
I'm new to TensorFlow.js and TensorFlow.

The context: we have trained a model with Custom Vision to recognize hair length (short, mid, long) from an image. This model was exported and we would like to use it locally with TensorFlow.js. The files exported from Custom Vision are a *.pb file and a labels.txt file.

I have used the tensorflowjs_converter Python script; here is the command I used to convert the frozen *.pb model into a JSON model:

tensorflowjs_converter --input_format=tf_frozen_model --output_node_names='model_outputs' --output_json OUTPUT_JSON C:\python\tf_models\hairlength\model.pb C:\python\tf_models\exports\

Then I paste this model.json and the shards into the assets folder of my Angular client. I then try to load the model and give it an image to get the prediction, but all I get are index values that are way out of bounds, since I only need 0: long, 1: mid, 2: short hair length. Here is a capture of the console prediction list

This is the class I have used in my client (TypeScript) for predictions:

import * as tf from '@tensorflow/tfjs';

// import {HAIRLENGTH_LABELS} from './hairlength';
import { FrozenModel } from '@tensorflow/tfjs';

const MODEL = 'assets/models/hairlength/model.json';
const INPUT_NODE_NAME = 'model_outputs';
const OUTPUT_NODE_NAME = 'model_outputs';
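// 255 / 2 = 127.5; subtracting and then dividing by this maps pixel values from [0, 255] to roughly [-1, 1]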
const PREPROCESS_DIVISOR = tf.scalar(255 / 2);

export class MobileNetHairLength {

  private model: FrozenModel;
  private labels = ['long', 'mid', 'short'];

  constructor() {}

  async load(){
    this.model = await tf.loadGraphModel(MODEL);
  }

  dispose() {
    if (this.model) {
      this.model.dispose();
    }
  }

  /**
   * Infer through MobileNet. This does standard ImageNet pre-processing before
   * inferring through the model. This method returns named activations as well
   * as softmax logits.
   *
   * @param input un-preprocessed input Array.
   * @return The softmax logits.
   */
  predict(input) {
    const preprocessedInput = tf.div(
        tf.sub(input, PREPROCESS_DIVISOR),
        PREPROCESS_DIVISOR);
    const reshapedInput =
        preprocessedInput.reshape([1, ...preprocessedInput.shape]);
    // tslint:disable-next-line:no-unused-expression
    return this.model.execute({[INPUT_NODE_NAME]: reshapedInput}, OUTPUT_NODE_NAME);
  }

  getTopKClasses(logits, topK: number) {
    const predictions = tf.tidy(() => {
      return tf.softmax(logits);
    });

    const values = predictions.dataSync();
    predictions.dispose();

    let predictionList = [];
    for (let i = 0; i < values.length; i++) {
      predictionList.push({value: values[i], index: i});
    }
    predictionList = predictionList
                         .sort((a, b) => {
                           return b.value - a.value;
                         })
                         .slice(0, topK);

    console.log(predictionList);
    return predictionList.map(x => {
      return {label: this.labels[x.index], value: x.value};
    });
  }
}

And this is the class that calls the one above; I just give it the canvas element:

import 'babel-polyfill';
import * as tf from '@tensorflow/tfjs';
import { MobileNetHairLength } from './mobilenet-hairlength';

export class PredictionHairLength {

  constructor() {}

  async predict(canvas) {
    const mobileNet = new MobileNetHairLength();
    await mobileNet.load();
    const pixels = tf.browser.fromPixels(canvas);

    console.log('Prediction');
    const result = mobileNet.predict(pixels);
    const topK = mobileNet.getTopKClasses(result, 3);

    topK.forEach(x => {
      console.log( `${x.value.toFixed(3)}: ${x.label}\n` );
    });

    mobileNet.dispose();
  }
}

My questions are:

Is the converter Python command correct?

Did I miss something in my client to get the correct indexes?

Thank you for your time and answers.

If you need more information, I would be glad to give it to you.

Updates 10/03/2019

I updated tensorflowjs to 1.0.0 using npm.

I saw that FrozenModel is now deprecated.

Exporting my Custom Vision model gives me a model.pb and a labels.txt file, like this: custom vision exports

I have tried using these files with Python and everything works fine... I would now like to convert this model.pb file to a model.json file to use it with tensorflowjs. For this I need to use tensorflowjs_converter; the problem is that the file structure required to convert this as a SavedModel is invalid, see: https://www.tensorflow.org/guide/saved_model#structure_of_a_savedmodel_directory
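For reference, this is roughly the layout the converter expects for a SavedModel directory (per the guide linked above; the directory name is just an example). Custom Vision only gives me the bare model.pb and labels.txt, not this structure:

my_saved_model/
    saved_model.pb
    assets/
    variables/
        variables.data-00000-of-00001
        variables.index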

The only thing that works is if I use the frozen_model format in the converter and give 'loss' as the output node name, like this: tensorflowjs_converter --input_format=tf_frozen_model --output_node_names='loss' --output_json OUTPUT_JSON C:\python\tf_models\hairlength\model.pb C:\python\tf_models\exports\

These are the outputs I get when running the above command: output files. Then I load the model; here is my code to load and predict using the exported JSON model (I have used predict() and removed the input and output nodes as you advised):

import * as tf from '@tensorflow/tfjs';
import { GraphModel } from '@tensorflow/tfjs';

const MODEL = 'assets/models/hairlength/model.json';
// const INPUT_NODE_NAME = 'Placeholder';
// const OUTPUT_NODE_NAME = 'loss';
const PREPROCESS_DIVISOR = tf.scalar(255 / 2);

export class MobileNetHairLength {

  private model: GraphModel;
  private labels = ['long', 'mid', 'short'];

  constructor() {}

  async load() {
    this.model = await tf.loadGraphModel(MODEL);
  }

  dispose() {
    if (this.model) {
      this.model.dispose();
    }
  }

  /**
   * Infer through MobileNet. This does standard ImageNet pre-processing before
   * inferring through the model. This method returns named activations as well
   * as softmax logits.
   *
   * @param input un-preprocessed input Array.
   * @return The softmax logits.
   */
  predict(input: tf.Tensor<tf.Rank>) {
    const preprocessedInput = tf.div(
      tf.sub(input.asType('float32'), PREPROCESS_DIVISOR),
      PREPROCESS_DIVISOR);
    const reshapedInput =
      preprocessedInput.reshape([...preprocessedInput.shape]);
    return this.model.predict(reshapedInput);
  }

  getTopKClasses(logits, topK: number) {
    const predictions = tf.tidy(() => {
      return tf.softmax(logits);
    });

    const values = predictions.dataSync();
    predictions.dispose();

    let predictionList = [];
    for (let i = 0; i < values.length; i++) {
      predictionList.push({value: values[i], index: i});
    }
    predictionList = predictionList
                         .sort((a, b) => {
                           return b.value - a.value;
                         })
                         .slice(0, topK);

    console.log(predictionList);
    return predictionList.map(x => {
      return {label: this.labels[x.index], value: x.value};
    });
  }
}

And the calling class is this one:

import 'babel-polyfill';
import * as tf from '@tensorflow/tfjs';
import { MobileNetHairLength } from './mobilenet-hairlength';

export class PredictionHairLength {

  constructor() {}

  async predict(canvas) {
    // Convert to tensor
    const mobileNet = new MobileNetHairLength();
    await mobileNet.load();
    const imgTensor = tf.browser.fromPixels(canvas);
    console.log(imgTensor);
    // Init input with correct shape
    const input = tf.zeros([1, 224, 224, 3]);
    // Add img to input
    input[0] = imgTensor;

    console.log('Prediction');
    const result = mobileNet.predict(input);
    console.log(result);

    const topK = mobileNet.getTopKClasses(result, 3);

    topK.forEach(x => {
      console.log( `${x.value.toFixed(3)}: ${x.label}\n` );
    });

    mobileNet.dispose();
  }
}

Then sending a canvas element taken from a webcam stream gives me this error: js error

How could I run the converter command with the 'saved model' format, since the file structure is wrong?

Why do I get 'Failed to compile fragment shader error, infinity : undeclared identifier in tf-core.esm'?

Thank you for your time and answers.

RLoris
  • Here is some more information about the files that Custom Vision exports: there are only 2 files, model.pb and labels.txt, containing only the tags long, mid, short. This means that when I use tensorflowjs_converter and give saved model as the input format, I do not have the correct SavedModel structure described here: https://www.tensorflow.org/guide/saved_model#structure_of_a_savedmodel_directory. Here is the command I try to execute: tensorflowjs_converter --input_format=tf_saved_model --output_json OUTPUT_JSON C:\python\tf_models\hairlength\ C:\python\tf_models\exports\ – RLoris Mar 08 '19 at 10:40
  • OK, this question would have been much clearer if you had linked to https://learn.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/. A 'custom vision model' could be anything. It appears that the Azure service indeed exports only frozen models, not SavedModels. – David Soergel Mar 18 '19 at 21:27
  • In any case, the loss is not the output of the model. You'll need to figure out the name of the predictions tensor. Maybe it's 'model_outputs', per https://stackoverflow.com/questions/49840929/use-azure-custom-vision-trained-model-with-tensorflow-js/54863614#54863614? See also https://github.com/MicrosoftDocs/azure-docs/issues/7192. – David Soergel Mar 18 '19 at 21:35
  • See also: https://github.com/tensorflow/tfjs/issues/1379 – David Soergel Mar 19 '19 at 03:32
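A small sketch related to the comment above about finding the predictions tensor name: once the converted model loads (assuming tf.loadGraphModel accepts it), the graph's declared inputs and outputs can simply be logged -- in recent tfjs versions the loaded GraphModel exposes them:

const model = await tf.loadGraphModel('assets/models/hairlength/model.json');
// TensorInfo entries with the names, shapes and dtypes the graph declares
console.log(model.inputs);
console.log(model.outputs);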

2 Answers


I think it's a simple bug in your code: const INPUT_NODE_NAME = 'model_outputs'; should probably be 'model_inputs' or whatever it actually is. Here, you're setting the output to be the input image and then reading it back without predicting anything.
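A minimal sketch of that change inside the asker's predict() method (the node names below are assumptions: 'Placeholder' is what the asker reports finding in the comments below, and the prediction tensor may be 'model_outputs' per the linked questions; verify against the actual graph before relying on them):

// Hypothetical node names; check the converted graph to confirm them.
const INPUT_NODE_NAME = 'Placeholder';
const OUTPUT_NODE_NAME = 'model_outputs';
const logits = this.model.execute({[INPUT_NODE_NAME]: reshapedInput}, OUTPUT_NODE_NAME);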

David Soergel
  • Thank you for your answer. I tried 'model_inputs' but got nothing; however, looking at Java code that uses a JSON model, I saw that they use 'Placeholder' as the input node name and 'loss' as the output node name. I tried to execute the model but an error is thrown: 'failed to compile fragment shader' – RLoris Mar 07 '19 at 10:05
  • Let's back up a few steps and rerun the converter. The input format tf_frozen_model is deprecated; please grab the latest version and use tf_saved_model instead. You don't need to specify the output node names at all, either in the converter command line or in your code-- the default signature of the SavedModel will be used. Similarly you don't need to worry about the input name; just pass a single Tensor to predict() (not execute()). – David Soergel Mar 07 '19 at 18:13
  • Hey, thank you for your answer; see my updated question using your advice. – RLoris Mar 10 '19 at 12:57
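A minimal sketch of the flow suggested in the comments above (reconvert, then pass a single tensor to predict() with no node names). The 224x224 size and the [-1, 1] preprocessing are assumptions borrowed from the question's code:

import * as tf from '@tensorflow/tfjs';

async function classifyCanvas(canvas: HTMLCanvasElement) {
  const model = await tf.loadGraphModel('assets/models/hairlength/model.json');
  const input = tf.tidy(() => {
    const img = tf.browser.fromPixels(canvas).toFloat();
    const resized = tf.image.resizeBilinear(img, [224, 224]); // assumed input size
    // map [0, 255] to [-1, 1] and add the batch dimension
    return resized.sub(127.5).div(127.5).expandDims(0);
  });
  const logits = model.predict(input) as tf.Tensor; // default signature, no node names
  input.dispose();
  return logits;
}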

I also ran into the failed shader compile. Running it on another, more powerful computer made the problem disappear.

It seems to me that Chrome doesn't have enough resources to succeed.
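Not part of the answer above, but a small diagnostic sketch of the same hypothesis: if the WebGL backend is what runs out of resources, forcing the CPU backend before loading the model should make the shader error disappear (much slower, but useful to confirm the cause):

console.log(tf.getBackend());  // usually 'webgl' in the browser
tf.setBackend('cpu');          // skips fragment shader compilation entirely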

V.Panichkin