My goal is to write a custom camera view controller that:
1. Can take photos in all four interface orientations with both the back and, when available, front camera.
2. Properly rotates and scales the preview "video" as well as the full-resolution photo.
3. Allows a (simple) effect to be applied to BOTH the preview "video" and the full-resolution photo.
Implementation (on iOS 4.2 / Xcode 3.2.5):
Due to requirement (3), I needed to drop down to AVFoundation.
I started with Technical Q&A QA1702 and made these changes:
- Changed the sessionPreset to AVCaptureSessionPresetPhoto.
- Added an AVCaptureStillImageOutput as an additional output before starting the session.
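In case it is relevant, the setup looks roughly like this (a sketch only: the variable names are mine, and error handling and retain/release are trimmed):

```objc
#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>
#import <CoreVideo/CoreVideo.h>

// Sketch of the session setup: QA1702 plus the two changes above.
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetPhoto;

AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
[session addInput:input];

// Video data output that delivers the preview "video" frames (as in QA1702).
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
videoOutput.videoSettings = [NSDictionary dictionaryWithObject:
        [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
    forKey:(id)kCVPixelBufferPixelFormatTypeKey];
videoOutput.minFrameDuration = CMTimeMake(1, 15); // the 15 FPS cap from QA1702

// videoQueue is kept in an ivar (not released here) so it can be "flushed" later.
videoQueue = dispatch_queue_create("videoQueue", NULL);
[videoOutput setSampleBufferDelegate:self queue:videoQueue];
[session addOutput:videoOutput];

// Still image output, added before the session starts.
AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
stillImageOutput.outputSettings = [NSDictionary dictionaryWithObject:AVVideoCodecJPEG
                                                              forKey:AVVideoCodecKey];
[session addOutput:stillImageOutput];

[session startRunning];
```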
The issue that I am having is with the performance of processing the preview image (a frame of the preview "video").
First, I get the UIImage result of imageFromSampleBuffer: on the sample buffer from captureOutput:didOutputSampleBuffer:fromConnection:. Then, I scale and rotate it for the screen using a Core Graphics bitmap context.
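Concretely, the per-frame path is something like this (imageFromSampleBuffer: is the helper from QA1702; the size and rotation values below are placeholders, since the real code picks them from the current interface orientation):

```objc
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // imageFromSampleBuffer: is the QA1702 helper that wraps the pixel
    // buffer in a CGImage and returns a UIImage.
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];

    // Scale and rotate for the screen in a Core Graphics bitmap context.
    // The size and the quarter-turn are placeholders; the real values
    // depend on the interface orientation.
    CGSize previewSize = CGSizeMake(320.0, 426.0);
    UIGraphicsBeginImageContext(previewSize);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, previewSize.width / 2.0, previewSize.height / 2.0);
    CGContextRotateCTM(context, M_PI_2);
    [image drawInRect:CGRectMake(-previewSize.height / 2.0, -previewSize.width / 2.0,
                                 previewSize.height, previewSize.width)];
    UIImage *previewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // ... apply the effect to previewImage, then hand it to the main
    // thread for display ...
}
```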
At this point, the frame rate is already below the 15 FPS specified for the session's video output, and when I add in the effect, it drops to around 10 FPS or lower. The app quickly crashes due to low memory.
I have had some success with dropping the frame rate to 9 FPS on the iPhone 4 and 8 FPS on the iPod Touch (4th gen).
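That just meant raising the minimum frame duration on the video data output, i.e. changing the QA1702 line to something like:

```objc
// 1/9 s per frame instead of QA1702's 1/15 s, i.e. ~9 FPS (1/8 on the iPod Touch).
videoOutput.minFrameDuration = CMTimeMake(1, 9);
```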
I have also added some code to "flush" the dispatch queue, but I am not sure how much it is actually helping. Basically, every 8-10 frames, a flag is set that signals captureOutput:didOutputSampleBuffer:fromConnection: to return right away rather than process the frame. The flag is reset after a sync operation on the output dispatch queue finishes (see the sketch below).
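Roughly, the relevant part of the delegate callback is this (a sketch; the ivar names are mine, and the flag handling is not strictly thread-safe as written):

```objc
// Inside captureOutput:didOutputSampleBuffer:fromConnection:.
// discardFrames and frameCount are ivars; videoQueue is the output's queue.
if (discardFrames) {
    return; // drop the frame while the queue drains
}
if (++frameCount % 8 == 0) {
    discardFrames = YES;
    // The empty block cannot run until every frame already queued has been
    // handled, so once the dispatch_sync returns the output queue has
    // drained and the flag can be reset. The sync is initiated from another
    // queue to avoid deadlocking videoQueue on itself.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        dispatch_sync(videoQueue, ^{});
        discardFrames = NO;
    });
}
// ... normal per-frame processing follows ...
```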
At this point I don't even mind the low frame rates, but obviously we can't ship with the low-memory crashes. Does anyone have an idea how to prevent the low-memory conditions in this case (and/or a better way to "flush" the dispatch queue)?