I have an Android application and I've integrated my own custom TensorFlow Lite model. I can print out the object category and probability score, but I'm at a loss on how to draw the object's bounding box on the image the user uploads.
Here is the code where I set the image into an ImageView.
// SET THE IMAGE
@Override
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == 100) {
        Uri uri = data.getData();
        imageBox.setImageURI(uri);
        try {
            img = MediaStore.Images.Media.getBitmap(this.getContentResolver(), uri);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
This is where I integrate the model.
predictBtn.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        // Scale the bitmap to the model's expected 128x128 input size.
        img = Bitmap.createScaledBitmap(img, 128, 128, true);
        try {
            Android model = Android.newInstance(getApplicationContext());

            // Creates inputs for reference.
            TensorImage image = TensorImage.fromBitmap(img);

            // Runs model inference and gets result.
            Android.Outputs outputs = model.process(image);
            Android.DetectionResult detectionResult = outputs.getDetectionResultList().get(0);

            // Gets results from DetectionResult.
            float score = detectionResult.getScoreAsFloat();
            RectF location = detectionResult.getLocationAsRectF();
            String category = detectionResult.getCategoryAsString();

            // Releases model resources if no longer used.
            model.close();

            // Print the detected object's category and its confidence score
            // to the text views.
            objecttv.setText(category);
            scoretv.setText(Float.toString(score));
        } catch (IOException e) {
            // TODO Handle the exception
        }
    }
});
What do I need to do to make use of the location returned by detectionResult.getLocationAsRectF()?
I am working with a static image, not tracking movement in real time.
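From reading the Canvas documentation, I'm guessing the answer involves something like the sketch below: copy the bitmap, draw the rectangle onto it with a Canvas, and put the copy back into the ImageView. To be clear about my assumptions: drawBoundingBox is just a helper name I made up, and I'm assuming (possibly wrongly) that location is in the coordinate space of the 128x128 bitmap fed to the model, so it would need scaling back up to the full-size image.

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.RectF;

// Hypothetical helper: draws `location` (assumed to be in 128x128
// model-input coordinates) onto a mutable copy of the full-size bitmap
// and shows the result in the ImageView.
private void drawBoundingBox(Bitmap original, RectF location, String category) {
    // Bitmaps decoded from the MediaStore are immutable, so draw on a copy.
    Bitmap mutable = original.copy(Bitmap.Config.ARGB_8888, true);
    Canvas canvas = new Canvas(mutable);

    // Scale the box from model-input coordinates to the original image size.
    float scaleX = original.getWidth() / 128f;
    float scaleY = original.getHeight() / 128f;
    RectF scaled = new RectF(
            location.left * scaleX,
            location.top * scaleY,
            location.right * scaleX,
            location.bottom * scaleY);

    // Outline-only red rectangle for the box.
    Paint boxPaint = new Paint();
    boxPaint.setColor(Color.RED);
    boxPaint.setStyle(Paint.Style.STROKE);
    boxPaint.setStrokeWidth(8f);
    canvas.drawRect(scaled, boxPaint);

    // Label the box with the detected category.
    Paint textPaint = new Paint();
    textPaint.setColor(Color.RED);
    textPaint.setTextSize(48f);
    canvas.drawText(category, scaled.left, scaled.top - 10f, textPaint);

    imageBox.setImageBitmap(mutable);
}

If that's roughly right, I'd call it with the full-size bitmap from onActivityResult (saved before createScaledBitmap overwrites img), plus the location and category from the detection result, but I'd like to confirm this is the intended way to use getLocationAsRectF().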