I'm using AVCaptureDevice to capture an image from the camera. After setting up the session and capturing, I get an `NSData *` and an `NSDictionary *` of metadata. However, because the EXIF metadata is kept separate from the pixel data, when I convert the `NSData *` to a `UIImage *` and put it in a `UIImageView *`, the image is misoriented.
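For context, here is a minimal sketch of the cheap approach I've seen suggested: instead of rewriting pixels, read the orientation flag from the metadata dictionary and attach it when constructing the `UIImage`, since `UIImageView` honors the flag at draw time. The function names here (`UIImageOrientationFromEXIF`, `OrientedImage`) are my own illustrative helpers, and this assumes the orientation key is present in the dictionary:

```objc
#import <UIKit/UIKit.h>
#import <ImageIO/ImageIO.h>

// Map an EXIF/TIFF orientation value (1-8) to UIImageOrientation.
static UIImageOrientation UIImageOrientationFromEXIF(NSInteger exif) {
    switch (exif) {
        case 1: return UIImageOrientationUp;
        case 2: return UIImageOrientationUpMirrored;
        case 3: return UIImageOrientationDown;
        case 4: return UIImageOrientationDownMirrored;
        case 5: return UIImageOrientationLeftMirrored;
        case 6: return UIImageOrientationRight;
        case 7: return UIImageOrientationRightMirrored;
        case 8: return UIImageOrientationLeft;
        default: return UIImageOrientationUp;
    }
}

// Build a correctly oriented UIImage without touching the pixel buffer:
// the orientation is only a flag, so this is effectively free.
UIImage *OrientedImage(NSData *jpegData, NSDictionary *metadata) {
    NSInteger exif =
        [metadata[(NSString *)kCGImagePropertyOrientation] integerValue];
    UIImage *raw = [UIImage imageWithData:jpegData];
    return [UIImage imageWithCGImage:raw.CGImage
                               scale:raw.scale
                         orientation:UIImageOrientationFromEXIF(exif)];
}
```

This avoids the expensive redraw entirely when all you need is correct on-screen display.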
I know there are various fix methods to get the image right, and I'm using one now, but when the image resolution is high (e.g., 3000 × 4000), the fix is slow: it takes around 500 milliseconds on my iPhone 5. After running Time Profiler in Instruments, most of the time is spent writing pixel data, which I guess is unavoidable with this approach.
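For reference, the fix method I'm using resembles the common redraw-based normalization below (`NormalizedImage` is my name for it); it renders the image into a fresh bitmap, so for a 3000 × 4000 photo roughly 48 MB of pixels get rewritten, which is where the ~500 ms goes:

```objc
#import <UIKit/UIKit.h>

UIImage *NormalizedImage(UIImage *image) {
    // Nothing to do if the pixels already match the display orientation.
    if (image.imageOrientation == UIImageOrientationUp) return image;
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    // -drawInRect: applies the orientation transform while copying every
    // pixel into the new context -- the expensive step on large images.
    [image drawInRect:(CGRect){CGPointZero, image.size}];
    UIImage *fixed = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return fixed;
}
```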
So I'm wondering: am I doing the right thing? Isn't there an easier and more efficient way to solve a problem like this? Please help me out, big thanks!