
I took a sample from the OpenCV sources and tried to put it to use on iOS. I did the following:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {

  // get cv::Mat from CMSampleBufferRef

  UIImage * img = [self imageFromSampleBuffer: sampleBuffer];  
  cv::Mat cvImg = [img CVGrayscaleMat];

  // Set up a HOG person detector with the default people-detector SVM
  cv::HOGDescriptor hog;
  hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());
  std::vector<cv::Rect> found;

  // hitThreshold 0.2, winStride 8x8, padding 16x16, scale 1.05, grouping threshold 2
  hog.detectMultiScale(cvImg, found, 0.2, cv::Size(8,8), cv::Size(16,16), 1.05, 2);

  for( int i = 0; i < (int)found.size(); i++ )
  {

    cv::Rect r = found[i];

    dispatch_async(dispatch_get_main_queue(), ^{
      self.label.text = [NSString stringWithFormat:@"Found at %d, %d, %d, %d", r.x, r.y, r.width, r.height];
    });

    NSLog(@"Found at %d, %d, %d, %d", r.x, r.y, r.width, r.height);         

  }
}

where CVGrayscaleMat was implemented as a category method on UIImage:

-(cv::Mat)CVGrayscaleMat
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGFloat cols = self.size.width;
    CGFloat rows = self.size.height;

    cv::Mat cvMat = cv::Mat(rows, cols, CV_8UC1); // 8 bits per component, 1 channel

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,                 // Pointer to backing data
                                                    cols,                       // Width of bitmap
                                                    rows,                       // Height of bitmap
                                                    8,                          // Bits per component
                                                    cvMat.step[0],              // Bytes per row
                                                    colorSpace,                 // Colorspace
                                                    kCGImageAlphaNone |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), self.CGImage);
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpace);

    return cvMat;
}

and imageFromSampleBuffer was the sample from Apple's docs (reproduced below for reference). The thing is, the app cannot detect people. I tried different sizes and poses, and nothing works for me. What am I missing?
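
For reference, imageFromSampleBuffer is essentially the version from Apple's Technical Q&A QA1702; it assumes the capture output is configured to deliver kCVPixelFormatType_32BGRA frames:

- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    // Get the Core Video pixel buffer and lock it for reading
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Wrap the BGRA pixel data in a bitmap context and create a CGImage from it
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                 bytesPerRow, colorSpace,
                                                 kCGBitmapByteOrder32Little |
                                                 kCGImageAlphaPremultipliedFirst);
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    UIImage *image = [UIImage imageWithCGImage:quartzImage];
    CGImageRelease(quartzImage);

    return image;
}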


1 Answer


I've managed to make it work. It turns out that a CV_8UC1 matrix is not the right input, although OpenCV doesn't report that anything is wrong when I pass one to the detectMultiScale method. When I convert the CV_8UC4 matrix to CV_8UC3 with

-(cv::Mat) CVMat3Channels
{
  cv::Mat rgbaMat = [self CVMat];

  cv::Mat rgbMat(self.size.height, self.size.width, CV_8UC3); // 8 bits per component, 3 channels

  cv::cvtColor(rgbaMat, rgbMat, CV_RGBA2RGB, 3); // drop the alpha channel (dstCn = 3)

  return rgbMat;
}

detection starts to work.
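
For completeness, the CVMat method used above is not shown here; it is essentially the standard UIImage-to-cv::Mat conversion from the OpenCV iOS documentation, written as a UIImage category method like CVGrayscaleMat. A sketch:

-(cv::Mat)CVMat
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(self.CGImage);
    CGFloat cols = self.size.width;
    CGFloat rows = self.size.height;

    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels (RGB + padding byte)

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,      // Pointer to backing data
                                                    cols,            // Width of bitmap
                                                    rows,            // Height of bitmap
                                                    8,               // Bits per component
                                                    cvMat.step[0],   // Bytes per row
                                                    colorSpace,      // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), self.CGImage);
    CGContextRelease(contextRef);

    return cvMat;
}

With that in place, the capture callback calls [img CVMat3Channels] instead of [img CVGrayscaleMat] and passes the resulting 3-channel matrix to detectMultiScale.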

• Were you able to get decent detection? For me, the above method has a lot of false detections and a very low frame rate. – timemanx Aug 19 '13 at 09:59