
I know that Core Image on iOS 5.0 supports facial detection (another example of this), which gives the overall location of a face, as well as the location of eyes and a mouth within that face.

However, I'd like to refine this location to detect the position of a mouth and teeth within it. My goal is to place a mouth guard over a user's mouth and teeth.

Is there a way to accomplish this on iOS?

edited by Brad Larson
asked by DeviPhone26
  • Editing questions to make them more readable and help the community is one thing, but completely re-writing them and adding sentences that didn't exist is another. @BradLarson added "I know that Core Image on iOS 5 supports facial detection". The user may know this, but how can you presume that? The only reason I point this out is because I mentioned this fact in my answer, and it makes my answer look like I didn't read the question at all! – bandejapaisa Jun 16 '12 at 17:29
  • @bandejapaisa - The asker linked to a tutorial about Core Image on iOS 5.0 (stated right in the title of the linked blog post), so I wanted to express what was at that link in case it went dead at some point in the future. They then indicated that this was insufficient for their needs, so I clarified the question to indicate why Core Image was not a good solution. I feel that this was a justified addition, given the contents of the links they present. – Brad Larson Jun 17 '12 at 00:31

1 Answer


I pointed out on my blog that the tutorial has something wrong:

Part 5) Adjust For The Coordinate System: it says you need to change the window's and image's coordinates, but that is exactly what you shouldn't do. You shouldn't convert your views/windows (in UIKit coordinates) to match Core Image coordinates as the tutorial does; you should do it the other way around and convert the detected feature coordinates into UIKit coordinates.

This is the relevant part of the code that does that:
(You can get the whole sample code from my blog post or directly from here. It contains this and other examples using CIFilters too :D )

// Create the image and detector
CIImage *image = [CIImage imageWithCGImage:imageView.image.CGImage];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace 
                                          context:...
                                          options:...];

// Core Image's coordinate system has its origin at the bottom-left corner, while
// UIKit's is at the top-left corner, so we need to convert feature positions
// before drawing them on screen. To do that we build an affine transform
CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
transform = CGAffineTransformTranslate(transform,
                                       0, -imageView.bounds.size.height);

// Get features from the image
NSArray *features = [detector featuresInImage:image];
for(CIFaceFeature* faceFeature in features) {

    // Get the face rect: Convert CoreImage to UIKit coordinates
    const CGRect faceRect = CGRectApplyAffineTransform(
                              faceFeature.bounds, transform);

    // create a UIView using the bounds of the face
    UIView *faceView = [[UIView alloc] initWithFrame:faceRect];

    ...

    if(faceFeature.hasMouthPosition) {
        // Get the mouth position translated to imageView UIKit coordinates
        const CGPoint mouthPos = CGPointApplyAffineTransform(
                                   faceFeature.mouthPosition, transform);
        ...
    }
}
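The `context:` and `options:` arguments are elided in the snippet above. As a minimal sketch of one possible configuration (the accuracy setting is just an illustrative choice, not something prescribed by the original post), the detector could be created like this:

// One possible way to fill in the elided context:/options: arguments.
// CIDetectorAccuracyHigh is slower but more precise than CIDetectorAccuracyLow.
NSDictionary *options = [NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                    forKey:CIDetectorAccuracy];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil  // let Core Image create its own context
                                          options:options];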

Once you have the mouth position (mouthPos), you simply place your overlay on or near it.

The exact offset can be determined experimentally and should be relative to the triangle formed by the eyes and the mouth. I would use a lot of faces to calibrate this distance if possible (Twitter avatars?).
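As a rough sketch of that placement step, sized relative to the eye distance: this assumes it runs inside the feature loop above and reuses the same transform and imageView; the 1.2 scale factor and the "mouthguard.png" image name are illustrative assumptions, not values from the original post.

if (faceFeature.hasLeftEyePosition && faceFeature.hasRightEyePosition &&
    faceFeature.hasMouthPosition) {
    // Convert eye and mouth positions from Core Image to UIKit coordinates
    const CGPoint leftEye  = CGPointApplyAffineTransform(faceFeature.leftEyePosition,  transform);
    const CGPoint rightEye = CGPointApplyAffineTransform(faceFeature.rightEyePosition, transform);
    const CGPoint mouthPos = CGPointApplyAffineTransform(faceFeature.mouthPosition,    transform);

    // Scale the overlay relative to the distance between the eyes (tune experimentally)
    const CGFloat eyeDistance  = hypot(rightEye.x - leftEye.x, rightEye.y - leftEye.y);
    const CGFloat overlayWidth = eyeDistance * 1.2f;

    // Center the overlay image on the mouth position
    UIImageView *guardView = [[UIImageView alloc]
        initWithImage:[UIImage imageNamed:@"mouthguard.png"]];
    guardView.bounds = CGRectMake(0, 0, overlayWidth, overlayWidth * 0.5f);
    guardView.center = mouthPos;
    [imageView addSubview:guardView];
}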

Hope it helps :)

answered by nacho4d