In the color blob detection sample of OpenCV4Android, ColorBlobDetectionActivity.java has an onTouch method which (at its start) detects the color of the part of the screen the user has touched - it receives the touch coordinates through its MotionEvent event argument.
I want to write a similar method whose output is simply the HSV value of a blob (like the region touched by the user in this sample app), but without the user touching the screen: the program should automatically detect blobs of different colors on a plain (e.g. white) background. For example, in the following image, the program should detect the positions of the red and green blobs on its own (rather than the user indicating them by touch) and then calculate the HSV (or RGB, from which I will calculate HSV) value of each blob.
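For the RGB-to-HSV step mentioned above, here is a self-contained sketch (RgbToHsv is a hypothetical helper name, not part of the sample; it follows OpenCV's COLOR_RGB2HSV_FULL convention, where H, S and V are all scaled to 0-255, unlike plain COLOR_RGB2HSV which keeps H in 0-179):

```java
class RgbToHsv {
    /** Convert 8-bit RGB to HSV in OpenCV's COLOR_RGB2HSV_FULL convention:
     *  H, S and V are all scaled to the 0..255 range. */
    static int[] rgbToHsvFull(int r, int g, int b) {
        int max = Math.max(r, Math.max(g, b));
        int min = Math.min(r, Math.min(g, b));
        int v = max;                                          // value = brightest channel
        int s = (max == 0) ? 0 : Math.round(255f * (max - min) / max);
        float hDeg;                                           // hue in degrees, 0..360
        if (max == min)      hDeg = 0;                        // gray: hue undefined, use 0
        else if (max == r)   hDeg = (60f * (g - b) / (max - min) + 360f) % 360f;
        else if (max == g)   hDeg = 60f * (b - r) / (max - min) + 120f;
        else                 hDeg = 60f * (r - g) / (max - min) + 240f;
        int h = Math.round(hDeg * 255f / 360f) % 256;         // rescale 0..360 -> 0..255
        return new int[]{h, s, v};
    }
}
```

With this convention, pure red maps to H=0, pure green to H=85 and pure blue to H=170.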
I am sure this should be possible with OpenCV4Android. The question is how? What steps should be followed (or which API methods should be used)?
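The general recipe - threshold away the plain background, group the remaining pixels into connected components, then average each component's color - can be sketched in plain Java (BlobSketch and the 230 background threshold are made up for illustration; this is not the sample's code). In OpenCV4Android the same steps map roughly to Core.inRange (or Imgproc.threshold), Imgproc.findContours, and Core.mean with a per-contour mask:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

class BlobSketch {
    /** Find non-white blobs by 4-connected flood fill and return the average
     *  packed-RGB color of each: one int[]{avgR, avgG, avgB, pixelCount} per blob. */
    static List<int[]> detectBlobs(int[][] img) {
        int rows = img.length, cols = img[0].length;
        boolean[][] seen = new boolean[rows][cols];
        List<int[]> blobs = new ArrayList<int[]>();
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                if (seen[r][c] || isBackground(img[r][c])) continue;
                long sumR = 0, sumG = 0, sumB = 0;
                int count = 0;
                ArrayDeque<int[]> stack = new ArrayDeque<int[]>();
                stack.push(new int[]{r, c});
                seen[r][c] = true;
                while (!stack.isEmpty()) {                    // flood-fill one component
                    int[] p = stack.pop();
                    int rgb = img[p[0]][p[1]];
                    sumR += (rgb >> 16) & 0xFF;
                    sumG += (rgb >> 8) & 0xFF;
                    sumB += rgb & 0xFF;
                    count++;
                    int[][] nbrs = {{p[0]-1,p[1]}, {p[0]+1,p[1]}, {p[0],p[1]-1}, {p[0],p[1]+1}};
                    for (int[] n : nbrs) {
                        if (n[0] >= 0 && n[0] < rows && n[1] >= 0 && n[1] < cols
                                && !seen[n[0]][n[1]] && !isBackground(img[n[0]][n[1]])) {
                            seen[n[0]][n[1]] = true;
                            stack.push(n);
                        }
                    }
                }
                blobs.add(new int[]{(int) (sumR / count), (int) (sumG / count),
                                    (int) (sumB / count), count});
            }
        }
        return blobs;
    }

    /** "Plain white background": all channels near 255 (threshold is an assumption). */
    static boolean isBackground(int rgb) {
        return ((rgb >> 16) & 0xFF) > 230 && ((rgb >> 8) & 0xFF) > 230 && (rgb & 0xFF) > 230;
    }
}
```

The average RGB of each blob can then be converted to HSV as described above.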
RELEVANT SNIPPET FROM ColorBlobDetectionActivity.java:
public boolean onTouch(View v, MotionEvent event) {
    int cols = mRgba.cols();
    int rows = mRgba.rows();

    int xOffset = (mOpenCvCameraView.getWidth() - cols) / 2;
    int yOffset = (mOpenCvCameraView.getHeight() - rows) / 2;

    int x = (int) event.getX() - xOffset;
    int y = (int) event.getY() - yOffset;

    Log.i(TAG, "Touch image coordinates: (" + x + ", " + y + ")");

    if ((x < 0) || (y < 0) || (x > cols) || (y > rows)) return false;

    Rect touchedRect = new Rect();
    touchedRect.x = (x > 4) ? x - 4 : 0;
    touchedRect.y = (y > 4) ? y - 4 : 0;
    touchedRect.width = (x + 4 < cols) ? x + 4 - touchedRect.x : cols - touchedRect.x;
    touchedRect.height = (y + 4 < rows) ? y + 4 - touchedRect.y : rows - touchedRect.y;

    Mat touchedRegionRgba = mRgba.submat(touchedRect);
    Mat touchedRegionHsv = new Mat();
    Imgproc.cvtColor(touchedRegionRgba, touchedRegionHsv, Imgproc.COLOR_RGB2HSV_FULL);

    // Calculate average color of touched region
    mBlobColorHsv = Core.sumElems(touchedRegionHsv);
    ...
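Note that Core.sumElems only sums the channels; in the full sample the snippet continues by dividing each channel of mBlobColorHsv by the pixel count of touchedRect to obtain the average. In plain Java terms (RegionAverage is a made-up name for illustration), that sum-then-divide averaging is:

```java
class RegionAverage {
    /** Per-channel mean of an interleaved 3-channel (e.g. HSV) pixel buffer,
     *  mirroring Core.sumElems() followed by division by the pixel count. */
    static double[] averageColor(int[] pixels) {   // pixels = {h,s,v, h,s,v, ...}
        double[] sum = new double[3];
        int count = pixels.length / 3;
        for (int i = 0; i < pixels.length; i++) {
            sum[i % 3] += pixels[i];               // accumulate each channel separately
        }
        for (int c = 0; c < 3; c++) sum[c] /= count;
        return sum;                                // {meanH, meanS, meanV}
    }
}
```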
EDIT:

inRange part:

At the statement Utils.matToBitmap(rgbaFrame, bitmap); I am getting the following exception:

In this snippet, rgbaFrame is the Mat returned from onCameraFrame, which represents a camera frame (and which is mRgba in the color blob detection sample whose GitHub link is in the question).
private void detectColoredBlob() {
    Mat hsvImage = new Mat();
    Imgproc.cvtColor(rgbaFrame, hsvImage, Imgproc.COLOR_RGB2HSV_FULL);

    Mat maskedImage = new Mat();
    Scalar lowerThreshold = new Scalar(100, 120, 120);
    Scalar upperThreshold = new Scalar(179, 255, 255);
    Core.inRange(hsvImage, lowerThreshold, upperThreshold, maskedImage);

    Mat dilatedMat = new Mat();
    List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
    Mat outputHierarchy = new Mat();
    Imgproc.dilate(maskedImage, dilatedMat, new Mat());
    Imgproc.findContours(dilatedMat, contours, outputHierarchy, Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
    Log.i(TAG, "IPAPP detectColoredBlob() outputHierarchy " + outputHierarchy.toString());

    /*for (int contourIndex = 0; contourIndex < contours.size(); contourIndex++) {
        //if (contours.get(contourIndex).size() > 100) { //ERROR The operator > is undefined for the argument type(s) Size, int
        Imgproc.drawContours(rgbaFrame, contours, contourIndex, new Scalar(0, 255, 0), 4);
        //}
    }*/

    // Bitmap.createBitmap() takes (width, height), i.e. (cols, rows); passing
    // (rows, cols) makes Utils.matToBitmap() throw for non-square frames.
    Bitmap bitmap = Bitmap.createBitmap(rgbaFrame.cols(), rgbaFrame.rows(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(maskedImage, bitmap);
    imageView.setImageBitmap(bitmap);
}
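Regarding the commented-out size check above: it fails to compile because MatOfPoint.size() returns a Size object, not a number. The usual way to filter small contours is Imgproc.contourArea(contours.get(contourIndex)) > 100, which (for a simple closed contour) computes the polygon area via the shoelace formula. As a self-contained illustration of that quantity (ContourArea is a made-up name, not an OpenCV class):

```java
class ContourArea {
    /** Polygon area via the shoelace formula - the same quantity
     *  Imgproc.contourArea() returns for a simple closed contour. */
    static double area(int[][] pts) {              // pts = {{x0,y0}, {x1,y1}, ...}
        double sum = 0;
        int n = pts.length;
        for (int i = 0; i < n; i++) {
            int[] p = pts[i];
            int[] q = pts[(i + 1) % n];            // next vertex, wrapping around
            sum += (double) p[0] * q[1] - (double) q[0] * p[1];
        }
        return Math.abs(sum) / 2.0;
    }
}
```

A 10x10 square contour, for example, yields an area of 100.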