7

In my Android app, I am capturing a screenshot programmatically from a background service. I obtain it as a Bitmap.

Next, I obtain the co-ordinates of a region of interest (ROI) with the following Android framework API:

Rect ROI = new Rect();
viewNode.getBoundsInScreen(ROI);    /* fills ROI with the node's bounds in screen coordinates */

Here, getBoundsInScreen() is the Android equivalent of the JavaScript method getBoundingClientRect().

A Rect in Android has the following properties:

rect.top
rect.left
rect.right
rect.bottom
rect.height()
rect.width()
rect.centerX()    /* rounded off to integer */
rect.centerY()
rect.exactCenterX()    /* exact value in float */
rect.exactCenterY()

(See also: What does top, left, right and bottom mean in Android Rect object)
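
For concreteness, here is a small illustration (with arbitrary values) of how those fields relate: top, left, right and bottom are absolute edge coordinates, and the other members are derived from them:

android.graphics.Rect r = new android.graphics.Rect(100, 200, 400, 260);    /* left, top, right, bottom */
int w = r.width();              /* 300 = right - left */
int h = r.height();             /* 60 = bottom - top */
int cx = r.centerX();           /* 250 = (left + right) / 2, as an int */
float ecx = r.exactCenterX();   /* 250.0f */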

Whereas a Rect in OpenCV has the following properties:

rect.width
rect.height
rect.x    /* x coordinate of the top-left corner  */
rect.y    /* y coordinate of the top-left corner  */

Now before we can perform any OpenCV-related operations, we need to transform the Android Rect to an OpenCV Rect.

(See also: Understanding how actually drawRect or drawing coordinates work in Android)
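
In short (and relevant here), Android's drawing coordinates use the same convention as OpenCV's image coordinates: the origin is the top-left corner, x grows to the right and y grows downward. A one-line sketch, assuming a Canvas and Paint are already in scope:

/* Fills a 100 x 100 px square whose top-left corner is 10 px from the left and 10 px from the top. */
canvas.drawRect(10, 10, 110, 110, paint);    /* left, top, right, bottom */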

There are two ways to convert an Android Rect to an OpenCV Rect (as suggested by Karl Phillip in his answer below). Both produce the same values and the same result:

/* Compute the top-left corner using the center point of the rectangle. */
int x = androidRect.centerX() - (androidRect.width() / 2);
int y = androidRect.centerY() - (androidRect.height() / 2);

// OR simply use the already available member variables:
x = androidRect.left;
y = androidRect.top;

int w = androidRect.width();
int h = androidRect.height();

org.opencv.core.Rect roi = new org.opencv.core.Rect(x, y, w, h);

Now one of the OpenCV operations I am performing is blurring the ROI within the screenshot:

Mat originalMat = new Mat();
Bitmap configuredBitmap32 = originalBitmap.copy(Bitmap.Config.ARGB_8888, true);
Utils.bitmapToMat(configuredBitmap32, originalMat);
Mat ROIMat = originalMat.submat(roi).clone();    /* copy of the ROI pixels */
Imgproc.GaussianBlur(ROIMat, ROIMat, new org.opencv.core.Size(0, 0), 5, 5);
ROIMat.copyTo(originalMat.submat(roi));          /* paste the blurred ROI back into the screenshot */

Bitmap blurredBitmap = Bitmap.createBitmap(originalMat.cols(), originalMat.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(originalMat, blurredBitmap);
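
As an aside, since submat() returns a view that shares memory with originalMat, the clone-and-copy-back round trip can be trimmed a little. A sketch of the leaner variant (same GaussianBlur parameters; it does not change where the blur lands, so it is a simplification rather than a fix):

Mat roiView = originalMat.submat(roi);    /* shares pixels with originalMat */
Mat blurredROI = new Mat();
Imgproc.GaussianBlur(roiView, blurredROI, new org.opencv.core.Size(0, 0), 5, 5);
blurredROI.copyTo(roiView);               /* writes straight back into the screenshot */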

This brings us very close to the desired result. Almost there, but not quite: the area just BENEATH the targeted region is blurred, rather than the region itself.

For example, if the targeted region of interest is a password field, the above code produces the following results:

On the left, Microsoft Live ROI, and on the right, Pinterest ROI:

As can be seen, the area just below the ROI gets blurred.

So my question is, finally, why isn't the exact region of interest blurred?

  • The co-ordinates obtained through the Android API getBoundsInScreen() appear to be correct.
  • Converting an Android Rect to an OpenCV Rect also appears to be correct. Or is it?
  • The code for blurring a region of interest also appears to be correct. Is there another way to do the same thing?

N.B: I've provided the actual, full-size screenshots as I am getting them. They have been scaled down by 50% to fit in this post, but other than that they are exactly as I am getting them on the Android device.

Yash Sampat
  • I recommend using markup to resize the images so the question is more pleasing to the eyes. – karlphillip Feb 24 '20 at 13:40
  • @karlphillip: good idea. However, I've provided the actual, full-size screenshots as I am getting them. They have been scaled down by 50% to fit in this post, but other than that they are exactly as I am getting them on the Android device. I don't want to miss out on any context that may be creating this problem. – Yash Sampat Feb 24 '20 at 14:14
  • There, I've edited the question to improve formatting. You can still right-click the images and open them in a new tab to see the full resolution image. Edit the question and observe the markup code used to resize the images. It dramatically improves the readability of the question. – karlphillip Feb 24 '20 at 16:06

2 Answers

4

If I'm not mistaken, OpenCV's Rect assumes that x and y specify the top left corner of the rectangle:

/* Compute the top-left corner using the center point of the rectangle
 * TODO: take care of float to int conversion
 */
int x = androidRect.centerX() - (androidRect.width() / 2);
int y = androidRect.centerY() - (androidRect.height() / 2);

// OR simply use the already available member variables:
x = androidRect.left;
y = androidRect.top;

int w = androidRect.width();
int h = androidRect.height();

org.opencv.core.Rect roi = new org.opencv.core.Rect(x, y, w, h);
karlphillip
  • Thank you Karl for your time & efforts! I just have one more problem: the co-ordinates calculation appears to be close but not quite the desired result. Please have a look at the screenshots in **UPDATED** question. I'm trying to blur/distort password field in the screenshot. With the above coordinates, always an area BENEATH the actual ROI is blurred. And the result of the two ways to calculate x, y which you've given both produce the same results. Could you please provide some direction in this regard? Thank you and sorry for the trouble .... :) – Yash Sampat Feb 24 '20 at 07:27
  • If you draw a small circle at those coordinates you will be able to debug the cropping calculation. I can't go further than this. Good luck! – karlphillip Feb 24 '20 at 09:36
  • Thank you for your efforts thus far, it was very helpful. I'll debug this in the manner advised by you. I've modified the question and added a bounty, in case you find time for this later ... thanks again!! :) – Yash Sampat Feb 24 '20 at 13:00
  • @Y.S it might be that android image starts from bottom left not from top left as it is in open cv. Try to draw a rect with (10,10, 10,1000) and have a look where is it drawn. If my assumption is correct something like coord.y = height - coord.y may work – hagor Feb 26 '20 at 22:48
  • @hagor wow that's a mind-bending notion indeed! While I'm considering your viewpoint, I just wanna mention that as per [this](https://stackoverflow.com/a/26253377/3287204) our initial assumption appears to be correct ... :) – Yash Sampat Feb 27 '20 at 06:42
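
Following karlphillip's debugging suggestion in the comments above, one way to see where roi actually lands is to draw it onto a copy of the screenshot before blurring. A minimal sketch (the colour, thickness and radius values are arbitrary):

/* Draw the ROI outline and its centre for visual inspection. */
Mat debugMat = originalMat.clone();
Imgproc.rectangle(debugMat, roi.tl(), roi.br(), new org.opencv.core.Scalar(255, 0, 0, 255), 3);
Imgproc.circle(debugMat, new org.opencv.core.Point(roi.x + roi.width / 2.0, roi.y + roi.height / 2.0),
               10, new org.opencv.core.Scalar(0, 255, 0, 255), 3);
Bitmap debugBitmap = Bitmap.createBitmap(debugMat.cols(), debugMat.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(debugMat, debugBitmap);    /* inspect or save this bitmap to check the placement */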
1

As per the screenshots, the value you get for rect.x is not the same as the x the OpenCV rect needs. The Android rect's values come from the screen's pixel coordinates (which depend on the display's pixel density), while the OpenCV rect's values refer to the image's rows and columns. If you compare the height of the screen with the total number of rows of the original Mat and they differ, the rect will not be placed correctly; for the rect to land in the right place they need to match, so you have to multiply the distances by a scaling factor to get the accurate position of the rect.
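
A minimal sketch of that scaling idea, assuming the mismatch is purely a size difference between the display and the captured Bitmap (the names displayWidth, displayHeight and scaleToBitmap are illustrative, not from the answer; any status-bar offset would need separate handling):

/* Scale screen-space bounds into bitmap-space bounds before building the OpenCV Rect. */
org.opencv.core.Rect scaleToBitmap(android.graphics.Rect screenRect,
                                   int displayWidth, int displayHeight,
                                   Bitmap screenshot) {
    float scaleX = (float) screenshot.getWidth() / displayWidth;
    float scaleY = (float) screenshot.getHeight() / displayHeight;

    int x = Math.round(screenRect.left * scaleX);
    int y = Math.round(screenRect.top * scaleY);
    int w = Math.round(screenRect.width() * scaleX);
    int h = Math.round(screenRect.height() * scaleY);

    return new org.opencv.core.Rect(x, y, w, h);
}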