I created my model using Keras with transfer learning on InceptionV3, and exported it to a .pb file using the following Python code:
import tensorflow as tf
from tensorflow.python.tools import freeze_graph
from keras import backend as K  # or tensorflow.keras backend, depending on how the model was built

MODEL_NAME = 'Model_all1'

def export_model(saver, model, input_node_names, output_node_name):
    tf.train.write_graph(K.get_session().graph_def, 'out_all2', MODEL_NAME + '_graph.pbtxt')
    saver.save(K.get_session(), 'out_all2/' + MODEL_NAME + '.chkp')
    freeze_graph.freeze_graph('out_all2/' + MODEL_NAME + '_graph.pbtxt', None,
                              False, 'out_all2/' + MODEL_NAME + '.chkp', output_node_name,
                              "save/restore_all", "save/Const:0",
                              'out_all2/final_' + MODEL_NAME + '.pb', True, "")
    print("graph saved!")

export_model(tf.train.Saver(), model, ["input_3"], "dense_6/Softmax")
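To sanity-check the node names passed to freeze_graph and used on Android ("input_3" and "dense_6/Softmax"), the frozen graph's nodes can be listed like this (a minimal sketch, assuming TensorFlow 1.x and the output path produced by export_model above):

import tensorflow as tf

# Load the frozen graph and print every node name, to confirm that the
# input/output names used on the Android side actually exist in it.
graph_def = tf.GraphDef()
with tf.gfile.GFile('out_all2/final_Model_all1.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    print(node.name)  # expect "input_3" and "dense_6/Softmax" to appear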
I then attempt to load my model into my Android application. In the app, I use the following code to preprocess the image before sending it to the .pb model. The Bitmap comes from the phone's camera.
//scaled the bitmap down
Bitmap bitmap = Bitmap.createScaledBitmap(imageBitmap, PIXEL_WIDTH, PIXEL_WIDTH, true);
float[] pixels = getPixelData(bitmap);

public static float[] getPixelData(Bitmap imageBitmap) {
    if (imageBitmap == null) {
        return null;
    }

    int width = imageBitmap.getWidth();
    int height = imageBitmap.getHeight();
    int inputSize = 299;
    int imageMean = 155;
    float imageStd = 255.0f;

    int[] pixels = new int[width * height];
    float[] floatValues = new float[inputSize * inputSize * 3];

    imageBitmap.getPixels(pixels, 0, width, 0, 0, width, height);

    for (int i = 0; i < pixels.length; ++i) {
        final int val = pixels[i];
        floatValues[i * 3 + 0] = (((val >> 16) & 0xFF) - imageMean) / imageStd;
        floatValues[i * 3 + 1] = (((val >> 8) & 0xFF) - imageMean) / imageStd;
        floatValues[i * 3 + 2] = ((val & 0xFF) - imageMean) / imageStd;
    }
    return floatValues;
}
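For comparison with the scaling in getPixelData() above: my understanding is that the stock Keras InceptionV3 preprocessing (keras.applications.inception_v3.preprocess_input) maps pixels into [-1, 1], which is not the same as the (pixel - 155) / 255.0 formula I use on Android. A minimal sketch of the two scalings side by side (assuming the model was trained with the stock preprocessing, which I have not confirmed):

# Sketch only: compares the scaling I believe the stock Keras InceptionV3
# preprocessing applies with the scaling used in getPixelData() on Android.
def keras_inception_scale(pixel):
    # keras.applications.inception_v3.preprocess_input: 0..255 -> [-1, 1]
    return pixel / 127.5 - 1.0

def android_scale(pixel, image_mean=155, image_std=255.0):
    # the formula currently used in getPixelData()
    return (pixel - image_mean) / image_std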
Below is my image-recognition code on Android, which runs against the loaded .pb file:
public ArrayList<Classification> recognize(final float[] pixels) {
    // Feed the preprocessed pixels into the graph's input node
    tfHelper.feed(inputName, pixels, 1, inputSize, inputSize, 3);

    // Run inference on the requested output nodes
    tfHelper.run(outputNames, logStats);

    // Copy the output tensor into the outputs array
    tfHelper.fetch(outputName, outputs);

    // Find the best classifications.
    PriorityQueue<Recognition> pq =
            new PriorityQueue<Recognition>(
                    3,
                    new Comparator<Recognition>() {
                        @Override
                        public int compare(Recognition lhs, Recognition rhs) {
                            // Intentionally reversed to put high confidence at the head of the queue.
                            return Float.compare(rhs.getConfidence(), lhs.getConfidence());
                        }
                    });

    for (int i = 0; i < outputs.length; ++i) {
        if (outputs[i] > THRESHOLD) {
            pq.add(new Classifier.Recognition(
                    "" + i, labels.size() > i ? labels.get(i) : "unknown", outputs[i], null));
        }
    }

    final ArrayList<Recognition> recognitions = new ArrayList<Recognition>();
    int recognitionsSize = Math.min(pq.size(), MAX_RESULTS);
    for (int i = 0; i < recognitionsSize; ++i) {
        recognitions.add(pq.poll());
    }
    Trace.endSection(); // "recognizeImage"

    // Convert the recognitions into the app's Classification list
    ArrayList<Classification> anslist = new ArrayList<>();
    for (int i = 0; i < recognitions.size(); i++) {
        Log.d("classification", recognitions.get(i).getTitle() + " confidence : " + recognitions.get(i).getConfidence());
        Classification ans = new Classification();
        ans.update(recognitions.get(i).getConfidence(), recognitions.get(i).getTitle());
        anslist.add(ans);
    }
    return anslist;
}
In my testing before I generated the frozen graph (the .pb file), the accuracy of my model was quite high. However, when I load it into my Android app, the predictions the model returns are all over the place.

I have been debugging this for a long time and cannot find the problem. Does anyone have any insight? Did I generate the .pb file incorrectly? Or am I sending the image to the frozen graph in the wrong way? I am stumped.
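For what it's worth, here is a minimal sketch (assuming TensorFlow 1.x, the paths above, and a hypothetical sample image test.jpg) of how the frozen .pb could be run directly in Python on one image, to separate the two possibilities (a bad export versus feeding the image wrongly on Android):

import numpy as np
import tensorflow as tf
from keras.preprocessing import image

# Load the frozen graph produced by export_model above.
graph_def = tf.GraphDef()
with tf.gfile.GFile('out_all2/final_Model_all1.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')
    with tf.Session(graph=graph) as sess:
        # 'test.jpg' is a hypothetical sample image; 299x299 matches the model input.
        img = image.load_img('test.jpg', target_size=(299, 299))
        x = image.img_to_array(img)
        x = x / 127.5 - 1.0              # assumed training-time scaling; adjust to match training
        x = np.expand_dims(x, axis=0)
        preds = sess.run('dense_6/Softmax:0', feed_dict={'input_3:0': x})
        print(preds)

If the frozen graph predicts correctly here, the .pb export itself is probably fine and the problem is somewhere on the Android side.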