If I understand your question (and I'm not sure I do), then the built-in face detection in iOS 5 (CIDetector) is the way to go. It is fast and very easy to use. When a face is detected in an image, you get CGPoint positions for the left and right eyes and the mouth. You also get a CGRect representing a bounding box for the detected face. From this you should be able to position your eyeglass images.
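The detection itself is only a few lines. Here is a sketch using the real CoreImage API (CIDetector / CIFaceFeature); the function name and the print statements are mine, and in practice you'd return the features instead of logging them:

```swift
import UIKit
import CoreImage

// Sketch: run Core Image face detection on a UIImage and log what it finds.
// CIDetector and CIFaceFeature are the actual iOS 5+ API.
func detectFaces(in uiImage: UIImage) {
    guard let ciImage = CIImage(image: uiImage) else { return }
    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    for case let face as CIFaceFeature in detector?.features(in: ciImage) ?? [] {
        // bounds is a CGRect in Core Image coordinates (origin at bottom-left)
        print("face bounds:", face.bounds)
        if face.hasLeftEyePosition  { print("left eye:",  face.leftEyePosition) }
        if face.hasRightEyePosition { print("right eye:", face.rightEyePosition) }
        if face.hasMouthPosition    { print("mouth:",     face.mouthPosition) }
    }
}
```

Note the `has…Position` checks: each feature point is only meaningful when its corresponding flag is true.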
There are a bunch of tutorials out there, but most of them are either incomplete or mess up the coordinates. Here's one that is correct: http://nacho4d-nacho4d.blogspot.com/2012/03/coreimage-and-uikit-coordinates.html
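The core of the coordinate fix in that tutorial is flipping the y-axis: Core Image uses a bottom-left origin while UIKit uses a top-left one. A minimal sketch, assuming you have the image's pixel height (the helper names are mine):

```swift
import Foundation  // CGPoint/CGRect (CoreGraphics on iOS)

// Convert a Core Image point (origin bottom-left) to UIKit (origin top-left).
func uiKitPoint(from ciPoint: CGPoint, imageHeight: CGFloat) -> CGPoint {
    return CGPoint(x: ciPoint.x, y: imageHeight - ciPoint.y)
}

// Convert a Core Image rect to UIKit coordinates; the rect's origin moves
// from its bottom-left corner to its top-left corner.
func uiKitRect(from ciRect: CGRect, imageHeight: CGFloat) -> CGRect {
    return CGRect(x: ciRect.origin.x,
                  y: imageHeight - ciRect.origin.y - ciRect.height,
                  width: ciRect.width,
                  height: ciRect.height)
}
```

Apply this to the eye/mouth points and the face bounds before positioning anything in a UIView.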
One thing to note: the tutorial uses a small image, so the resulting coordinates do not have to be scaled to the on-screen (UIImageView) representation of the image. Assuming you use a photo taken with the iPad camera, you will have to scale the coordinates by the amount the source image is scaled (unless you reduce its size before running the face detection routine, which may not be a bad idea). You may also need to rotate the image to the correct orientation.
There is a routine in one of the answers here: UIImagePickerController camera preview is portrait in landscape app
And this answer has a good routine for finding the scale of an image when presented by a UIImageView using 'aspect fit': How to get the size of a scaled UIImage in UIImageView?
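For 'aspect fit' the scale factor is simply the smaller of the two axis ratios, since the whole image must remain visible. A minimal sketch (the helper name is mine):

```swift
import Foundation  // CGSize, CGFloat (CoreGraphics on iOS)

// Scale applied to an image displayed with UIViewContentMode "aspect fit":
// the smaller of the width and height ratios, so the whole image fits.
func aspectFitScale(imageSize: CGSize, viewSize: CGSize) -> CGFloat {
    return min(viewSize.width / imageSize.width,
               viewSize.height / imageSize.height)
}
```

Multiply the face-detection coordinates from the full-size image by this scale (and add the centering offset on the letterboxed axis) to land them correctly on the UIImageView.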