
In the color blob detection sample of OpenCV4Android, ColorBlobDetectionActivity.java has an onTouch method which (at its start) detects the color of the part of the screen the user has touched, since it receives the touched position through its MotionEvent argument.

I want to write a similar method whose output is simply the HSV value of a blob (like the region the user touches in this sample app), but I do not want the user to indicate the blob by touching the screen. Instead, I want the program to detect blobs of different colors (on a plain background, e.g. a white background) automatically.

For example, in the following image, the program should be able to detect the positions of the red and green blobs automatically (rather than the user indicating them by touch) and then calculate the HSV value of each blob (or its RGB value, from which I will calculate HSV).

[Image: red and green blobs on a white background]

I am sure this should be possible using OpenCV4Android. The question is how? What steps should be followed (or what methods from API should be used)?

RELEVANT SNIPPET FROM ColorBlobDetectionActivity.java:

public boolean onTouch(View v, MotionEvent event) {
        int cols = mRgba.cols();
        int rows = mRgba.rows();

        int xOffset = (mOpenCvCameraView.getWidth() - cols) / 2;
        int yOffset = (mOpenCvCameraView.getHeight() - rows) / 2;

        int x = (int)event.getX() - xOffset;
        int y = (int)event.getY() - yOffset;

        Log.i(TAG, "Touch image coordinates: (" + x + ", " + y + ")");

        if ((x < 0) || (y < 0) || (x > cols) || (y > rows)) return false;

        Rect touchedRect = new Rect();

        touchedRect.x = (x>4) ? x-4 : 0;
        touchedRect.y = (y>4) ? y-4 : 0;

        touchedRect.width = (x+4 < cols) ? x + 4 - touchedRect.x : cols - touchedRect.x;
        touchedRect.height = (y+4 < rows) ? y + 4 - touchedRect.y : rows - touchedRect.y;

        Mat touchedRegionRgba = mRgba.submat(touchedRect);

        Mat touchedRegionHsv = new Mat();
        Imgproc.cvtColor(touchedRegionRgba, touchedRegionHsv, Imgproc.COLOR_RGB2HSV_FULL);

        // Calculate average color of touched region
        mBlobColorHsv = Core.sumElems(touchedRegionHsv);

        ...

EDIT:

inRange part:

At the statement Utils.matToBitmap(rgbaFrame, bitmap);, I am getting the following exception:

[Screenshot: exception stack trace]

In this snippet, rgbaFrame is the Mat returned from onCameraFrame, i.e. it represents a camera frame (it corresponds to mRgba in the color blob detection sample, whose GitHub link is in the question).

private void detectColoredBlob() {
    Mat hsvImage = new Mat();
    Imgproc.cvtColor(rgbaFrame, hsvImage, Imgproc.COLOR_RGB2HSV_FULL);

    Mat maskedImage = new Mat();
    Scalar lowerThreshold = new Scalar(100, 120, 120);
    Scalar upperThreshold = new Scalar(179, 255, 255);
    Core.inRange(hsvImage, lowerThreshold, upperThreshold, maskedImage);

    Mat dilatedMat = new Mat();
    //List<MatOfPoint> contours = new ArrayList<>();
    List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
    Mat outputHierarchy = new Mat();
    Imgproc.dilate(maskedImage, dilatedMat, new Mat());
    Imgproc.findContours(dilatedMat, contours, outputHierarchy, Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);

    Log.i(TAG, "IPAPP detectColoredBlob() outputHierarchy " + outputHierarchy.toString());

    /*for (int contourIndex = 0; contourIndex < contours.size(); contourIndex++) {
        //if (contours.get(contourIndex).size() > 100) { //ERROR The operator > is undefined for the argument type(s) Size, int
            Imgproc.drawContours(rgbaFrame, contours, contourIndex, new Scalar(0, 255, 0), 4);
        //}
    }*/

    Bitmap bitmap = Bitmap.createBitmap(rgbaFrame.rows(), rgbaFrame.cols(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(maskedImage, bitmap);
    imageView.setImageBitmap(bitmap);
}

1 Answer


For each possible blob color, you can do an inRange() operation to get a binary (black & white) Mat of the pixels in that range.

Color detection in opencv
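For instance, something along these lines (a minimal sketch; rgba here stands for the RGBA camera frame, i.e. the mRgba of the sample, and the HSV bounds are placeholders you would tune per blob color):

    // Work in HSV space, as the sample app does.
    Mat hsv = new Mat();
    Imgproc.cvtColor(rgba, hsv, Imgproc.COLOR_RGB2HSV_FULL);

    // Keep only the pixels whose HSV values fall inside the chosen range.
    Mat mask = new Mat();
    Core.inRange(hsv, new Scalar(100, 120, 120), new Scalar(140, 255, 255), mask);
    // 'mask' is now a binary (CV_8UC1) image: 255 where the color matched, 0 elsewhere.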

Then you can find the blobs with findContours(). You can look at each contour's size to see if it meets some size/area threshold you set.
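Continuing the sketch above, assuming mask is the binary image produced by inRange() and TAG is defined as in the sample; the 500-pixel area threshold is just an illustrative value:

    // findContours() modifies its input, so pass a copy of the mask.
    List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
    Mat hierarchy = new Mat();
    Imgproc.findContours(mask.clone(), contours, hierarchy,
            Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

    for (MatOfPoint contour : contours) {
        double area = Imgproc.contourArea(contour);
        if (area > 500) {                       // ignore tiny blobs / noise
            Rect box = Imgproc.boundingRect(contour);
            Log.i(TAG, "Blob at (" + box.x + ", " + box.y + "), area = " + area);
        }
    }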

-- edit --

If the background is always going to be white, a simpler approach would be to immediately convert the image to grayscale, and then binary. See Java OpenCV: How to convert a color image into black and white?.
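For example (again a rough sketch with rgba as the RGBA frame; the fixed threshold of 200 is only a guess you would have to tune, or you could let Otsu's method pick it for you):

    // On a white background, grayscale + inverse threshold makes the
    // colored blobs white (255) and the background black (0).
    Mat gray = new Mat();
    Imgproc.cvtColor(rgba, gray, Imgproc.COLOR_RGBA2GRAY);

    Mat binary = new Mat();
    Imgproc.threshold(gray, binary, 200, 255, Imgproc.THRESH_BINARY_INV);

    // Or let OpenCV pick the threshold automatically with Otsu's method:
    // Imgproc.threshold(gray, binary, 0, 255,
    //         Imgproc.THRESH_BINARY_INV | Imgproc.THRESH_OTSU);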

Then you could find contours, go back to the original image, and compute the average color inside each contour.
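Roughly like this, assuming contours is the list produced by findContours() on the binary image (CvType comes from org.opencv.core): fill each contour into its own mask and take Core.mean() over the original RGBA frame; you can then convert that mean to HSV if you need it.

    for (int i = 0; i < contours.size(); i++) {
        // Mask containing only this contour, filled in (thickness -1).
        Mat contourMask = Mat.zeros(rgba.size(), CvType.CV_8UC1);
        Imgproc.drawContours(contourMask, contours, i, new Scalar(255), -1);

        // Average RGBA color of the pixels inside this blob.
        Scalar meanColor = Core.mean(rgba, contourMask);
        Log.i(TAG, "Contour " + i + " mean RGBA = " + meanColor);
    }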

The tricky part would be setting the binary threshold value just right so the color blobs show up correctly in the B&W images.

  • Thank you. Can you be a little simpler in your jargon? I could not understand this. The question you pointed to is in C++, I am using OpenCV4Android. I'll be very thankful for your guidance. – Solace Dec 02 '15 at 19:10
  • Here's a link with some java code: http://stackoverflow.com/questions/28570088/opencv-java-inrange-function. Once you have the inRange() part working and can produce a binary mat of each desired color range, I'd suggest searching on how to find and use contours with opencv java. There are tons of good posts on stackoverflow about opencv contours using java. – medloh Dec 03 '15 at 17:24
  • Hey I wrote the code and showed images in [this question](http://stackoverflow.com/questions/34170856/trying-to-detect-blue-color-from-image-using-opencv-and-getting-unexpected-resu). Can you have a look? – Solace Dec 09 '15 at 04:45
  • 1
    Sorry, I have nothing to add. Looks like you got some very good answers in that thread. Nice progress. – medloh Dec 09 '15 at 16:03