
I'm trying to use AVMetadataFaceObject to get the yaw and roll of a face in a video. From what I can tell, the yaw is reported in 45-degree increments and the roll in 30-degree increments.

Is there a way to increase this precision?

(Code as seen in Proper usage of CIDetectorTracking).
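
For reference, the logging setup looks roughly like this (a minimal sketch; the FaceAngleLogger class and its wiring are illustrative, not the exact code from the linked question):

```swift
import AVFoundation

// Minimal sketch: stream AVMetadataFaceObject angles from the camera.
final class FaceAngleLogger: NSObject, AVCaptureMetadataOutputObjectsDelegate {
    let session = AVCaptureSession()

    func start() throws {
        guard let camera = AVCaptureDevice.default(for: .video) else { return }
        let input = try AVCaptureDeviceInput(device: camera)
        if session.canAddInput(input) { session.addInput(input) }

        let output = AVCaptureMetadataOutput()
        if session.canAddOutput(output) { session.addOutput(output) }
        output.setMetadataObjectsDelegate(self, queue: .main)
        output.metadataObjectTypes = [.face]   // ask for face metadata only

        session.startRunning()
    }

    func metadataOutput(_ output: AVCaptureMetadataOutput,
                        didOutput metadataObjects: [AVMetadataObject],
                        from connection: AVCaptureConnection) {
        for case let face as AVMetadataFaceObject in metadataObjects {
            // hasYawAngle/hasRollAngle guard against absent values; these
            // are the angles that appear to arrive in coarse steps.
            if face.hasYawAngle  { print("yaw:  \(face.yawAngle)") }
            if face.hasRollAngle { print("roll: \(face.rollAngle)") }
        }
    }
}
```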

  • Whatever made you think it was that inaccurate? The docs make it seem as though you get a floating-point value anywhere from 0 to 90. – CodaFi Jun 14 '13 at 00:02
  • That's what I would expect, but when I log the output values, it jumps in 45-degree increments. So if my head is vertical, it says 0. As I start to tilt my head it stays at 0, and after it reaches a certain tilt, it jumps. – Liron Jun 16 '13 at 11:06

1 Answer


You can get the rectangles of the eyes and calculate the angle yourself. You should also investigate the changes made in iOS 7, as there are many improvements in this area.
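
For example, here is a rough sketch of that approach using Core Image's CIDetector (the helper name estimatedRoll is hypothetical, and this covers roll only; yaw can't be recovered from the two eye points alone):

```swift
import CoreGraphics
import CoreImage

// Hypothetical helper: estimate roll from the line between the eyes.
// CIFaceFeature reports leftEyePosition/rightEyePosition per detected face.
func estimatedRoll(in image: CIImage) -> CGFloat? {
    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    guard let face = detector?.features(in: image).first as? CIFaceFeature,
          face.hasLeftEyePosition, face.hasRightEyePosition else { return nil }

    // Angle of the eye-to-eye line relative to horizontal, in degrees.
    // This varies continuously, unlike the stepped rollAngle values.
    let dx = face.rightEyePosition.x - face.leftEyePosition.x
    let dy = face.rightEyePosition.y - face.leftEyePosition.y
    return atan2(dy, dx) * 180 / .pi
}
```

Because atan2 is continuous, this gives a smooth roll estimate instead of 30-degree steps, at the cost of running the heavier Core Image detector on each frame.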

  • I'll definitely check out what iOS 7 has to offer. – Liron Aug 21 '13 at 11:19
  • How do you get the eye positions? From what I see in the docs, the AVMetadataFaceObject only has the yaw/roll and face bounds. You can use a CIDetector or something like that to do more accurate face detection, but that uses more CPU and makes my app laggy. – Liron Aug 21 '13 at 11:24
  • This is an old thread, but you get eye positions from Core Image face detection (CIDetector). It is slower than AVFoundation metadata, but gives more detail (eyes and mouth) and precision. – Stan James Jul 12 '16 at 12:31