2

I am new to OpenCV4Android. Here is some code I wrote to detect a blue-colored blob in an image. Of the following images, image 1 is what was displayed on my laptop. I ran the application, and the frame captured by the OpenCV camera is image 2. You can follow the code to see what the rest of the images are. (As you can see in the code, all the images are saved to the SD card.)

I have the following questions:

  • Why has the color of the light-blue blob turned out light-yellow in the RGBA frame captured by the camera (shown in image 2)?

  • I created a boundingRect around the largest blue-colored blob and then took the ROI with rgbaFrame.submat(detectedBlobRoi). But as you can see in the last image, it looks like just a couple of grey pixels. I was expecting the blue-colored sphere separated from the rest of the image.

What am I missing or doing wrong?

CODE:

private void detectColoredBlob () { 
        Highgui.imwrite("/mnt/sdcard/DCIM/rgbaFrame.jpg", rgbaFrame);//check
        Mat hsvImage = new Mat(); 
        Imgproc.cvtColor(rgbaFrame, hsvImage, Imgproc.COLOR_RGB2HSV_FULL);
        Highgui.imwrite("/mnt/sdcard/DCIM/hsvImage.jpg", hsvImage);//check

        Mat maskedImage = new Mat(); 
        Scalar lowerThreshold = new Scalar(170, 0, 0); 
        Scalar upperThreshold = new Scalar(270, 255, 255); 
        Core.inRange(hsvImage, lowerThreshold, upperThreshold, maskedImage);
        Highgui.imwrite("/mnt/sdcard/DCIM/maskedImage.jpg", maskedImage);//check

        Mat dilatedMat= new Mat(); 
        Imgproc.dilate(maskedImage, dilatedMat, new Mat() ); 
        Highgui.imwrite("/mnt/sdcard/DCIM/dilatedMat.jpg", dilatedMat);//check
        List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
        Imgproc.findContours(dilatedMat, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
        //Use only the largest contour. Other contours (any other possible blobs of this color range) will be ignored.
        MatOfPoint largestContour = contours.get(0);
        double largestContourArea = Imgproc.contourArea(largestContour);
        for ( int i=1; i<contours.size(); ++i) {//NB Notice the prefix increment.
            MatOfPoint currentContour = contours.get(0);
            double currentContourArea = Imgproc.contourArea(currentContour);
            if (currentContourArea > largestContourArea) {
                largestContourArea = currentContourArea;
                largestContour = currentContour;
            }
        }

        Rect detectedBlobRoi = Imgproc.boundingRect(largestContour);
        Mat detectedBlobRgba = rgbaFrame.submat(detectedBlobRoi);
        Highgui.imwrite("/mnt/sdcard/DCIM/detectedBlobRgba.jpg", detectedBlobRgba);//check
     }
  1. Original image on the computer. This was captured by pointing the phone's camera at the laptop screen.

  2. rgbaFrame.jpg

  3. hsvImage.jpg

  4. dilatedMat.jpg

  5. maskedImage.jpg

  6. detectedBlobRgba.jpg


EDIT:

I just used `Core.inRange(hsvImage, new Scalar(0,50,40), new Scalar(10,255,255), maskedImage);` and captured a screenshot of the website colorizer.org after giving it custom HSV values for red, i.e. for the OpenCV red Scalar(3, 217, 255), which falls within the range set in that inRange call. I scaled the channel values to colorizer.org's scale (H = 0-360, S = 0-100, V = 0-100) by multiplying the H value by 2, and dividing the S and V values by 255 and multiplying by 100. This gave me 6, 85.09, 88.24, which I set on the website before taking a screenshot (the first of the following images).
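
For reference, here is a minimal sketch of that scaling (the helper name is mine, just for illustration):

    // Scale OpenCV HSV channel values to colorizer.org's scale
    // (H = 0-360, S = 0-100, V = 0-100), as described above.
    private static double[] toColorizerScale(double h, double s, double v) {
        return new double[] {
                h * 2.0,            // hue, e.g. 3 -> 6
                s / 255.0 * 100.0,  // saturation, e.g. 217 -> ~85.1
                v / 255.0 * 100.0   // value, same scaling as saturation
        };
    }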

  1. Original screenshot (I captured this frame).

  2. rgbaFrame.jpg

  3. hsvImage.jpg

  4. maskedImage.jpg

  5. dilatedMat.jpg

  6. detectedBlobRgba.jpg


IMPORTANT:

The method given above is actually invoked in my test application when I touch inside rgbaFrame (i.e. it is invoked inside the onTouch method). I am also using the following code to print to a TextView the Hue, Saturation, and Value of the colored blob that I touched. When I ran this application and touched the red-colored blob, I got the following values: Hue: 3, Saturation: 219, Value: 255.

public boolean onTouch(View v, MotionEvent motionEvent) {
    detectColoredBlob();
    int cols = rgbaFrame.cols();
    int rows = rgbaFrame.rows();

    int xOffset = (openCvCameraBridge.getWidth() - cols) / 2;
    int yOffset = (openCvCameraBridge.getHeight() - rows) / 2;

    int x = (int) motionEvent.getX() - xOffset;
    int y = (int) motionEvent.getY() - yOffset;

    Log.i(TAG, "Touch image coordinates: (" + x + ", " + y + ")");//check

    if ((x < 0) || (y < 0) || (x > cols) || (y > rows)) { return false; }

    Rect touchedRect = new Rect();
    touchedRect.x = (x > 4) ? x - 4 : 0;
    touchedRect.y = (y > 4) ? y - 4 : 0;
    touchedRect.width = (x + 4 < cols) ? x + 4 - touchedRect.x : cols - touchedRect.x;
    touchedRect.height = (y + 4 < rows) ? y + 4 - touchedRect.y : rows - touchedRect.y;
    Mat touchedRegionRgba = rgbaFrame.submat(touchedRect);

    Mat touchedRegionHsv = new Mat();
    Imgproc.cvtColor(touchedRegionRgba, touchedRegionHsv, Imgproc.COLOR_RGB2HSV_FULL);

    double[] channelsDoubleArray = touchedRegionHsv.get(0, 0);//**********
    float[] channelsFloatArrayScaled = new float[3];
    for (int i = 0; i < channelsDoubleArray.length; i++) {
        if (i == 0) {
            channelsFloatArrayScaled[i] = ((float) channelsDoubleArray[i]) * 2;// TODO Wrap an ArrayIndexOutOfBoundsException wrapper
        } else if (i == 1 || i == 2) {
            channelsFloatArrayScaled[i] = ((float) channelsDoubleArray[i]) / 255;// TODO Wrap an ArrayIndexOutOfBoundsException wrapper
        }
    }

    int androidColor = Color.HSVToColor(channelsFloatArrayScaled);

    view.setBackgroundColor(androidColor);
    textView.setText("Hue : " + channelsDoubleArray[0] + "\nSaturation : " + channelsDoubleArray[1] + "\nValue : "
            + channelsDoubleArray[2]);

    touchedRegionHsv.release();
    return false; // don't need subsequent touch events
}

Solace
  • opencv divides all hue values by 2 to fit 360 degree to 1 byte. So basically whole hue range is within 0..180 in openCV. you should use some minimum s and v channel values too to reduce influence of noise, but the hue thing will be most important. – Micka Dec 09 '15 at 07:59
  • @Micka Thank you. I tried the range suggested by Haris in their answer, which had a range for s and v channels also, and this has considerably reduced the noise (as you can see in the edit). But alas, the problems are still there: (1) Why is the red in the original image converted to bluish in rgbaFrame.jpg. (2) Why is the submat created from ROI stored in detectedBlobRgba.jpg so small? Shouldn't it be the red big blob? – Solace Dec 09 '15 at 10:47
  • didn't check your code, but if the image really is RGBa then it will be displayed wrong because openCV uses BGRa ordering for displaying and saving (not sure about java/android/python API). You can use cvtColor function to convert from rgb to bgr. Use BGR2HSV or RGB2HSV flags accordingly. – Micka Dec 09 '15 at 10:53
  • about (2): isn't the biggest part of the masked area in detectedBlobRgba.jpg the red blob of the original image?!? – Micka Dec 09 '15 at 11:12
  • so if my earlier comment wasn't clear: about (1): convert from RGBA to BGR before saving or displaying the image will probably solve the rgbaFrame.jpg problem! – Micka Dec 09 '15 at 11:13
  • @Micka About (1), I tried to change the flag from `Imgproc.COLOR_RGB2HSV_FULL` to `Imgproc.Imgproc.COLOR_BGR2HSV_FULL`, but that did not change anything(red in original image still appears bluish in rgbaFrame). Then I thought may be I have to use `cvtColor` to convert `rgbaFrame` to BGR format when it is created in the `onCameraFrame` function by `rgbaFrame = inputFrame.rgba()`, but [here in the documentation](http://docs.opencv.org/2.4/modules/imgproc/doc/miscellaneous_transformations.html#cvtcolor) they have not mentioned a conversion from RGB to BGR. – Solace Dec 09 '15 at 11:46
  • @Micka About your response to (2), Yes! But I think you are thinking the captions in my question belong to the images above them. It is actually the opposite. The image detectedBlobRgba.jpg in the question is below the label/caption '6. detectedBlobRgba'. This image is that bold-blackish-dot below the caption. It is that small (and dubious). – Solace Dec 09 '15 at 11:49
  • please try it anyways, as you can see the red part looks blue now which is clearly bgr vs rgb confusion – Micka Dec 09 '15 at 11:50
  • about (2): in your input image you see the blue big square which should be red. but since your hsv was (probably) computed from rgb2hsv correctly, that square should have "red color" hue values correctly. since you used inRange value above 170 (which is 340 in real hue range) you ask for red color near 360 degree and that's what that square might be (couldnt test myself) – Micka Dec 09 '15 at 11:55
  • about the small blob image: didnt check your code, is that problem of priority? maybe an error in your contour extraction/interpretation – Micka Dec 09 '15 at 11:58
  • contour loop has an error, you always grab index 0 instead of index i – Micka Dec 09 '15 at 12:00
  • @Micka If I somehow find a way to convert rgbaFrame to BGR, will I get (only) the red colored square in detectedBlobRgba.jpg? (This is my first time using OpenCV, so I am not sure how things work). – Solace Dec 09 '15 at 12:04
  • @Micka Oooh my bad! thank you for pointing it out, blue square is captured in detectedBlobRgb.jpg. Sorry for being stupid. – Solace Dec 09 '15 at 12:17
  • @Micka Can you write the gist of your comments as an answer, so that I can mark it as an accepted answer? – Solace Dec 09 '15 at 15:08

2 Answers

4

The range you are using is probably wrong for blue. In OpenCV the hue range is 0-180, but you have given 170-270. Find the correct hue range for blue and use it in inRange.

  1. http://answers.opencv.org/question/30547/need-to-know-the-hsv-value/#30564
  2. http://answers.opencv.org/question/28899/correct-hsv-inrange-values-for-red-objects/#28901

You can refer to the answers linked above for choosing the correct HSV values.

Below is code for segmenting red color; check it against your code and make sure it segments a red object.

    Imgproc.cvtColor(rgbaFrame, hsv, Imgproc.COLOR_RGB2HSV,4); // Convert to hsv for color segmentation.      
    Core.inRange(hsv,new Scalar(0,50,40,0), new Scalar(10,255,255,0),thr);//upper red range of hue cylinder 
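
For the blue blob in the question, a minimal sketch along the same lines; the hue bounds below are only a rough starting guess on OpenCV's 0-180 hue scale (roughly what Micka suggests in the comments) and will need tuning:

    Mat hsv = new Mat();
    Mat blueMask = new Mat();
    Imgproc.cvtColor(rgbaFrame, hsv, Imgproc.COLOR_RGB2HSV, 4); // hue ends up in 0-180
    // Rough starting range for blue; the minimum S and V values suppress noise.
    Core.inRange(hsv, new Scalar(110, 50, 40, 0), new Scalar(125, 255, 255, 0), blueMask);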
Haris
  • red hue range should have some part at the end of the hue range too (around value 180 in openCV). I would try openCV hues 110 - 125 for blue color as a starting range. – Micka Dec 09 '15 at 08:48
  • What is the 4th value in the scalars? HSV has only three channels. – Solace Dec 09 '15 at 10:04
  • I just tested your range for red, edited the question to post the result. The questions remain (though the noise in the image seems to have reduced considerably): (1) Why is the red in the original image converted to bluish in `rgbaFrame.jpg`. (2) Why is the `submat` created from `ROI` stored in `detectedBlobRgba.jpg` so small? Shouldn't it be the red big blob? – Solace Dec 09 '15 at 10:41
  • @Micka Yes, but I tried to capture a screenshot image from the website colorizer.org and gave a custom value for HSV which falls in the range passed to `inRange`. You can see the first paragraph in my edit in the question. – Solace Dec 09 '15 at 10:44
4

There are multiple traps in converting an image to HSV color space and using HSV color space.

  1. OpenCV uses a compressed hue range, because originally hue ranges from 0 to 360, which means the values can't fit in 1 byte (0 to 255), while the saturation and value channels are exactly covered by 1 byte. Therefore OpenCV uses hue values divided by 2, so the hue channel is covered by matrix entries between 0 and 180. Accordingly, your hue range of 170 to 270 should be divided by 2, giving a range of 85 to 135 in OpenCV.

  2. Hue tells you the color tone, but saturation and value are still important for reducing noise, so add some minimum saturation and value to your threshold, too.

  3. Very important: OpenCV uses BGR memory ordering for rendering and image saving. This means that if your image has RGB(A) ordering and you save it without color conversion, you swap the R and B channels, so what should be red becomes blue, and so on. Unfortunately, you normally can't tell from the image data itself whether it is RGB- or BGR-ordered, so you have to find out from the image source. OpenCV provides flags to convert from RGB(A) to HSV, from BGR(A) to HSV, from RGB to BGR, etc., so that is no problem as long as you know which memory layout your image uses. However, displaying and saving always assume BGR ordering, so if you want to display or save the image, convert it to BGR first (a minimal sketch follows below). The HSV values will be the same whether you convert a BGR image with BGR2HSV or an RGB image with RGB2HSV, but they will be wrong if you convert a BGR image with RGB2HSV or an RGB image with BGR2HSV... I'm not 100% sure about the Java/Python/Android APIs of OpenCV, but your image really looks like the B and R channels are swapped or misinterpreted (although since you use an RGB-to-HSV conversion, that is not a problem for the HSV values).
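
As a minimal sketch of point 3, assuming rgbaFrame really is RGBA (as inputFrame.rgba() suggests), you could convert to BGR just before saving:

    // Highgui.imwrite / imshow assume BGR(A) ordering, so convert the RGBA frame first,
    // otherwise red and blue appear swapped in the saved image.
    Mat bgrForSaving = new Mat();
    Imgproc.cvtColor(rgbaFrame, bgrForSaving, Imgproc.COLOR_RGBA2BGR);
    Highgui.imwrite("/mnt/sdcard/DCIM/rgbaFrame.jpg", bgrForSaving);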

About your contour extraction: there is a tiny (copy-paste?) bug in your code of the kind everyone hits once in a while:

    MatOfPoint largestContour = contours.get(0);
    double largestContourArea = Imgproc.contourArea(largestContour);
    for (int i = 1; i < contours.size(); ++i) {//NB Notice the prefix increment.
        // HERE you had MatOfPoint currentContour = contours.get(0); so you tested the first contour in each iteration
        MatOfPoint currentContour = contours.get(i);
        double currentContourArea = Imgproc.contourArea(currentContour);
        if (currentContourArea > largestContourArea) {
            largestContourArea = currentContourArea;
            largestContour = currentContour;
        }
    }

So probably just this line has to be changed, to use i instead of 0 in the loop:

MatOfPoint currentContour = contours.get(i);
Micka