
My goal is to write a custom camera view controller that:

  1. Can take photos in all four interface orientations with both the back and, when available, front camera.
  2. Properly rotates and scales the preview "video" as well as the full resolution photo.
  3. Allows a (simple) effect to be applied to BOTH the preview "video" and full resolution photo.

Implementation (on iOS 4.2 / Xcode 3.2.5):

Due to requirement (3), I needed to drop down to AVFoundation.

I started with Technical Q&A QA1702 and made these changes:

  1. Changed the sessionPreset to AVCaptureSessionPresetPhoto.
  2. Added an AVCaptureStillImageOutput as an additional output before starting the session (see the sketch just after this list).
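
For reference, here is a simplified sketch of the resulting setup, following the QA1702 pattern (variable names are illustrative and error handling is omitted):

// Simplified sketch of the capture session setup (error handling omitted).
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetPhoto;

AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
[session addInput:input];

// Video data output that delivers the preview "video" frames to the delegate.
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
videoOutput.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]
                                                        forKey:(id)kCVPixelBufferPixelFormatTypeKey];
dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
[videoOutput setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
[session addOutput:videoOutput];

// Still image output added before the session starts (change 2).
AVCaptureStillImageOutput *stillOutput = [[AVCaptureStillImageOutput alloc] init];
[session addOutput:stillOutput];

[session startRunning];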

The issue that I am having is with the performance of processing the preview image (a frame of the preview "video").

First, I get the UIImage result of imageFromSampleBuffer: on the sample buffer from captureOutput:didOutputSampleBuffer:fromConnection:. Then, I scale and rotate it for the screen using a Core Graphics bitmap context.
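
Roughly, the scale-and-rotate step looks like the following (an illustrative sketch rather than my exact code; it assumes a 90° rotation into a portrait-sized target):

// Illustrative scale-and-rotate of a preview frame for display.
// Assumes the frame should be rotated 90 degrees into targetSize.
- (UIImage *)rotatedScaledImage:(UIImage *)image toSize:(CGSize)targetSize
{
    UIGraphicsBeginImageContext(targetSize);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Rotate the drawing around the center of the target rect.
    CGContextTranslateCTM(context, targetSize.width / 2.0f, targetSize.height / 2.0f);
    CGContextRotateCTM(context, M_PI_2);
    CGContextTranslateCTM(context, -targetSize.height / 2.0f, -targetSize.width / 2.0f);

    // Draw the image scaled to fill the (pre-rotation) rect.
    [image drawInRect:CGRectMake(0.0f, 0.0f, targetSize.height, targetSize.width)];

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}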

At this point, the frame rate is already below the 15 FPS specified for the session's video output, and when I add in the effect, it drops to around 10 or less. The app quickly crashes due to low memory.

I have had some success with dropping the frame rate to 9 FPS on the iPhone 4 and 8 FPS on the iPod Touch (4th gen).
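
One way to request such a cap (a sketch, assuming the iOS 4-era minFrameDuration property on AVCaptureVideoDataOutput; videoOutput is an illustrative name) is:

// Sketch: ask the video data output for at most ~9 FPS.
videoOutput.minFrameDuration = CMTimeMake(1, 9);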

I have also added in some code to "flush" the dispatch queue, but I am not sure how much it is actually helping. Basically, every 8-10 frames, a flag is set that signals captureOutput:didOutputSampleBuffer:fromConnection: to return right away rather than process the frame. The flag is reset after a sync operation on the output dispatch queue finishes.
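
In rough sketch form (the ivars discardFrames, frameCount, and outputQueue are illustrative names):

// Illustrative sketch of the "flush" flag; ivar names are made up.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if (discardFrames) {
        // We are "flushing": skip this frame entirely.
        return;
    }

    if (++frameCount % 8 == 0) {
        discardFrames = YES;
        // From another queue, wait until everything already queued on the
        // output queue has run, then allow processing again.
        dispatch_async(dispatch_get_main_queue(), ^{
            dispatch_sync(outputQueue, ^{ });
            discardFrames = NO;
        });
    }

    // ... normal frame processing ...
}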

At this point I don't even mind the low frame rates, but obviously we can't ship with the low memory crashes. Anyone have any idea how to take action to prevent the low memory conditions in this case (and/or a better way to "flush" the dispatch queue)?

gerry3

2 Answers

4

To prevent the memory issues, simply create an autorelease pool in captureOutput:didOutputSampleBuffer:fromConnection:.

This makes sense, since imageFromSampleBuffer: returns an autoreleased UIImage object. It also releases any autoreleased objects created by the image processing code at the end of each frame, rather than letting them accumulate.

// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput 
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer 
fromConnection:(AVCaptureConnection *)connection
{ 
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    // Create a UIImage from the sample buffer data
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];

    // < Add your code here that uses the image >

    [pool release];
}

My testing has shown that this runs without memory warnings on an iPhone 4 or iPod Touch (4th gen), even if the requested FPS is very high (e.g. 60) and the image processing is very slow (e.g. 0.5+ secs).

OLD SOLUTION:

As Brad pointed out, Apple recommends that image processing be done on a background thread so as not to interfere with UI responsiveness. I didn't notice much lag in this case, but best practices are best practices, so use the autorelease pool solution above instead of running everything on the main dispatch queue / main thread.

To prevent the memory issues, simply use the main dispatch queue instead of creating a new one.

This also means that you don't have to switch to the main thread in captureOutput:didOutputSampleBuffer:fromConnection: when you want to update the UI.

In setupCaptureSession, change FROM:

// Configure your output.
dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);

TO:

// we want our dispatch to be on the main thread
[output setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
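
With the delegate on the main queue, the end of the processing in captureOutput:didOutputSampleBuffer:fromConnection: can update the UI directly, for example (previewImageView and processedImage are illustrative names):

// Safe to touch UIKit here because we are already on the main thread.
previewImageView.image = processedImage;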
gerry3
  • Yes, I'm guessing this is causing the session to just drop frames when things can't keep up. However, I believe that Apple discourages this because of the impact it can have on the responsiveness of the interface (because of the processing you're doing on the main thread). It still seems like there should be a way to process the frames on a non-main queue without being overloaded. – Brad Larson Feb 04 '11 at 22:26
  • Well, it's actually responsive enough so far, but I might try to kick off the actual processing into the background after `captureOutput:didOutputSampleBuffer:fromConnection:` and then come back to the main thread to update the UI. This approach does fix the issue and this is what Apple's GLVideoFrame example from WWDC does (of course, the OpenGL processing is very fast). – gerry3 Feb 05 '11 at 05:56
  • Doing the processing in the background dropped the frame rate to unacceptable levels and made the app seem significantly less responsive, even if the UI itself was slightly more responsive. Doing everything on the main thread seems like the best approach in this case. – gerry3 Feb 07 '11 at 19:47
  • Actually, just adding an autorelease pool seems to have solved the memory issues while allowing the processing to remain on a background queue / thread. I have updated the answer. – gerry3 Feb 07 '11 at 22:11
  • @gerry3 - Were you then seeing errors on the console about a missing autorelease pool, or was it that you needed to drain the pool more frequently? – Brad Larson Feb 07 '11 at 22:33
  • @Brad The latter. Not sure why the caller doesn't drain the pool every frame. – gerry3 Feb 08 '11 at 03:40
2

A fundamentally better approach would be to use OpenGL to handle as much of the image-related heavy lifting as possible (as I see you're trying in your latest attempt). However, even then you might have issues with frames building up faster than they can be processed.

While it seems strange that you'd be running into memory accumulation when processing frames (in my experience, you just stop getting them if you can't process them fast enough), Grand Central Dispatch queues can get jammed up if they are waiting on I/O.

Perhaps a dispatch semaphore would let you throttle the addition of new items to the processing queues. For more on this, I highly recommend Mike Ash's "GCD Practicum" article, where he looks at optimizing an I/O bound thumbnail processing operation using dispatch semaphores.
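
For example, a sketch of that kind of throttling (frameRenderingSemaphore and processingQueue here are illustrative ivars, with the semaphore created via dispatch_semaphore_create(1)):

// Sketch: drop incoming frames while the previous one is still being processed.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // If the semaphore is unavailable, a frame is still in flight; drop this one.
    if (dispatch_semaphore_wait(frameRenderingSemaphore, DISPATCH_TIME_NOW) != 0) {
        return;
    }

    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];

    dispatch_async(processingQueue, ^{
        // ... expensive processing of image here ...
        dispatch_semaphore_signal(frameRenderingSemaphore);
    });
}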

Brad Larson
  • OpenGL was providing a great frame rate, but that wasn't at the correct preset (although the Photo preset video frames are only slightly higher res than the example). Yes, strange, but easy to reproduce. You can grab Apple's sample code, throw it in a new iPhone app, add a short delay in `captureOutput:didOutputSampleBuffer:fromConnection:` and watch it crash in seconds. One difference between this and the OpenGL example that I haven't explored yet is that the latter used the main dispatch queue (main thread) which may ultimately be similar to a semaphore approach. I will be looking into both. – gerry3 Feb 04 '11 at 08:58
  • It turned out to be as simple as using the main dispatch queue (see my answer). – gerry3 Feb 04 '11 at 21:48
  • The best solution so far has been to add an autorelease pool. I have updated my answer. – gerry3 Feb 07 '11 at 22:12