
I'm trying to detect the edges of a business card (and draw them) with an iPhone camera, using OpenCV. I'm new to this framework, as well as to computer vision and C++.

I'm trying to use the solution explained here: https://stackoverflow.com/a/14123682/3708095, whose GitHub project is https://github.com/foundry/OpenCVSquares

It works with a predefined image, but I'm trying to get it working with the camera.

To do so, I'm implementing the CvVideoCameraDelegate protocol in CVViewController.mm, as explained in http://docs.opencv.org/doc/tutorials/ios/video_processing/video_processing.html, like this:

#ifdef __cplusplus
-(void)processImage:(cv::Mat &)matImage
{
//NSLog (@"Processing Image");
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
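    // CVSquares::detectedSquaresInImage (from the OpenCVSquares project linked above) returns
    // the frame with any detected rectangles drawn onto it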

    matImage = CVSquares::detectedSquaresInImage(matImage, self.tolerance, self.threshold, self.levels, [self accuracy]);

    UIImage *image = [[UIImage alloc]initWithCVMat:matImage orientation:UIImageOrientationDown];

    dispatch_async(dispatch_get_main_queue(), ^{
        self.imageView.image = image;
    });
});

}
#endif

EDIT:

If I do it like this, it gives me an EXC_BAD_ACCESS...

If I clone matImage before processing it, then (judging by the logs) it seems to process the image and even find rectangles, but the rectangles are never drawn on the image shown in the imageView.

cv::Mat temp = matImage.clone();    
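// temp is a deep copy of the camera frame, so the block below can capture and use it
// safely after this callback returns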

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{

    UIImage *image = [[UIImage alloc] initWithCVMat:CVSquares::detectedSquaresInImage(temp, self.tolerance, self.threshold, self.levels, [self accuracy])
                                        orientation:UIImageOrientationDown];

    dispatch_async(dispatch_get_main_queue(), ^{
        self.imageView.image = image;
    });
});

I'm pretty sure I'm missing something, probably because I'm not passing some object (or a pointer to an object) correctly, so the object that needs to be modified never is.

Anyway, if this is not the right approach, I would really appreciate a tutorial or example that does something like this, either with OpenCV or with GPUImage (I'm not familiar with that either)...


1 Answer


So the solution was actually pretty simple...

Instead of using matImage to build a UIImage and set imageView.image, I just needed to modify matImage in place: the CvVideoCamera was already initialized with (and linked to) the imageView, so it renders the modified frame itself, and there is no need to convert to a UIImage or dispatch to the main queue:

self.videoCamera = [[CvVideoCamera alloc]initWithParentView:self.imageView];
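
For completeness, here's a minimal sketch of the camera setup, along the lines of the video_processing tutorial linked above (the preset, orientation and FPS values are just illustrative, not taken from my actual project):

- (void)viewDidLoad
{
    [super viewDidLoad];

    // Attach the camera to the image view; CvVideoCamera renders each (processed) frame into it
    self.videoCamera = [[CvVideoCamera alloc] initWithParentView:self.imageView];
    self.videoCamera.delegate = self;
    self.videoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionBack;
    self.videoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPreset640x480;
    self.videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationPortrait;
    self.videoCamera.defaultFPS = 30;
}

- (void)viewDidAppear:(BOOL)animated
{
    [super viewDidAppear:animated];
    [self.videoCamera start];
}

Once [self.videoCamera start] is called, processImage: gets invoked for every captured frame.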

So the final method looks like this:

#ifdef __cplusplus
-(void)processImage:(cv::Mat &)matImage
{
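    // Called on the camera's video-processing queue; modifying matImage in place is enough,
    // since CvVideoCamera renders the frame into its parent view (the imageView) itself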
    matImage = CVSquares::detectedSquaresInImage(matImage, self.angleTolerance, self.threshold, self.levels, self.accuracy);
}
#endif