
I've been struggling to translate the CIDetector (face detection) results into coordinates relative to the UIImageView displaying the image so I can draw the coordinates using CGPaths.

I've looked at all the questions here and all the tutorials I could find, and most of them use small images that are not scaled when displayed in a UIImageView (example). The problem I am having is with large images that are scaled using aspectFit when displayed in a UIImageView, and with determining the correct scale + translation values.

I am getting inconsistent results when testing with images of different sizes/aspect ratios, so I think my routine is flawed. I've been struggling with this for a while, so if anyone has some tips or can x-ray what I am doing wrong, that would be a great help.

What I am doing:

  • get the face coordinates
  • use the frameForImage routine below (found here on SO) to get the scale and bounds of the UIImageView image
  • create transform for scale + translation
  • apply transform to the CIDetector result

// my routine for determining transform values

NSDictionary* data = [self frameForImage:self.imageView.image inImageViewAspectFit:self.imageView];

CGRect scaledImageBounds = CGRectFromString([data objectForKey:@"bounds"]);
float scale = [[data objectForKey:@"scale"] floatValue];

CGAffineTransform transform = CGAffineTransformMakeScale(scale, -scale);

transform = CGAffineTransformTranslate(transform, 
          scaledImageBounds.origin.x / scale, 
          -(scaledImageBounds.origin.y / scale + scaledImageBounds.size.height / scale));

The CIDetector results are then transformed using:

     mouthPosition = CGPointApplyAffineTransform(mouthPosition, transform);

// example of bad result: scale seems incorrect

[screenshot: feature markers drawn at the wrong scale/position over the image]

// routine below found here on SO for determining the bounds of an image scaled in a UIImageView using `aspectFit`

-(NSDictionary*)frameForImage:(UIImage*)image inImageViewAspectFit:(UIImageView*)myImageView
{
    float imageRatio = image.size.width / image.size.height;
    float viewRatio = myImageView.frame.size.width / myImageView.frame.size.height;

    float scale;
    CGRect boundingRect;
    if(imageRatio < viewRatio)
    {
        // image is proportionally taller than the view: the height fills the
        // view and the image is centered horizontally
        scale = myImageView.frame.size.height / image.size.height;
        float width = scale * image.size.width;
        float topLeftX = (myImageView.frame.size.width - width) * 0.5;
        boundingRect = CGRectMake(topLeftX, 0, width, myImageView.frame.size.height);
    }
    else
    {
        // image is proportionally wider than the view: the width fills the
        // view and the image is centered vertically
        scale = myImageView.frame.size.width / image.size.width;
        float height = scale * image.size.height;
        float topLeftY = (myImageView.frame.size.height - height) * 0.5;
        boundingRect = CGRectMake(0, topLeftY, myImageView.frame.size.width, height);
    }

    NSDictionary * data = [NSDictionary dictionaryWithObjectsAndKeys:
                           [NSNumber numberWithFloat:scale], @"scale",
                           NSStringFromCGRect(boundingRect), @"bounds",
                           nil];

    return data;
}
spring

2 Answers


I completely understand what you are trying to do, but let me offer you a different way to achieve what you want.

  • you have an oversized image
  • you know the size of the imageView
  • ask the image for its CGImage
  • determine a 'scale' factor, which is the imageView width divided by the image width
  • multiply this value by your image height, then subtract the result from the imageView height to get the "empty" height in the imageView; let's call this 'fillHeight'
  • divide 'fillHeight' by 2 and round to get the 'offset' value used below
  • using the context provided by UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, NO, 0), paint the background whatever color you want, then draw your CGImage:

    CGContextDrawImage(context, CGRectMake(0, offset, imageView.bounds.size.width, rintf(image.size.height * scale)), [image CGImage]);

  • get this new image using:

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;

  • set the image: imageView.image = image;

Now you can map back to your image exactly, as you know the EXACT scaling ratio and offsets.
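A minimal sketch of the steps above, rolled into one method. The method name, the white background color, and the use of -[UIImage drawInRect:] rather than CGContextDrawImage (drawInRect: avoids the vertical flip a raw CGImage draw needs) are assumptions for illustration, not part of the original answer:

    // Redraw 'image' aspect-fit into a context the size of 'imageView',
    // returning the rendered image plus the exact scale and vertical offset used.
    - (UIImage *)renderAspectFitImage:(UIImage *)image
                          inImageView:(UIImageView *)imageView
                             outScale:(CGFloat *)outScale
                            outOffset:(CGFloat *)outOffset
    {
        CGSize viewSize = imageView.bounds.size;

        // scale = imageView width / image width
        CGFloat scale = viewSize.width / image.size.width;
        CGFloat scaledHeight = rintf(image.size.height * scale);

        // "empty" vertical space in the imageView, split evenly top and bottom
        CGFloat fillHeight = viewSize.height - scaledHeight;
        CGFloat offset = rintf(fillHeight / 2.0f);

        UIGraphicsBeginImageContextWithOptions(viewSize, NO, 0);

        // paint the background, then draw the image at the known scale and offset
        [[UIColor whiteColor] setFill];
        UIRectFill(CGRectMake(0, 0, viewSize.width, viewSize.height));
        [image drawInRect:CGRectMake(0, offset, viewSize.width, scaledHeight)];

        UIImage *rendered = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        if (outScale)  { *outScale  = scale; }
        if (outOffset) { *outOffset = offset; }
        return rendered;
    }

You would then set imageView.image to the returned image and keep scale and offset around for mapping the CIDetector results back, as described above.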

David H
  • Hey thanks – it works! That's some "drive around it" type thinking. I'd like to have been able to sort out the translation issues but have spent too much time on it already. – spring Aug 30 '12 at 23:26
  • Hi @David, hope you can help – I am struggling to make the above work with aspect fill. Thank you for your time; this is my recent question: http://stackoverflow.com/questions/36099735/how-to-use-cidetector-and-cifacefeature-with-large-images-and-different-aspect-r – Tal Zion Mar 19 '16 at 09:14
  • @TalZion you can calculate exactly what Apple is doing to fill the view (that is, the magnification and the offsets from the view) and apply it. Take a deep breath, draw it out on a pad of paper, then figure it out. It's just math. – David H Mar 19 '16 at 21:54

This might be the simple answer you are looking for. If your x and y coordinates are inverted, you can mirror them yourself. In the snippet below I'm looping through the returned features; I need to invert the y coordinate, and the x coordinate as well if it's the front-facing camera:

    for (CIFaceFeature *f in features) {
        // CIDetector returns coordinates with a bottom-left origin,
        // so flip y into UIKit's top-left coordinate system.
        float newy = -f.bounds.origin.y + self.frame.size.height - f.bounds.size.height;

        // The front-facing camera is mirrored, so flip x as well.
        float newx = f.bounds.origin.x;
        if( isMirrored ) {
            newx = -f.bounds.origin.x + self.frame.size.width - f.bounds.size.width;
        }

        // 'soups' is this answer's own array of overlay images; 'rnd' is an index into it.
        [[soups objectAtIndex:rnd] drawInRect:CGRectMake(newx, newy, f.bounds.size.width, f.bounds.size.height)];
    }
chrisallick