I'm trying to modify the GLCameraRipple sample application from Apple to process video frames on a background thread. In this example, it handles each frame on the main thread using the following code:

// Set dispatch to be on the main thread so OpenGL can do things with the data
[dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

If I change this code to process frames on a background thread:

dispatch_queue_t videoQueue = dispatch_queue_create("com.test.queue", NULL);
[dataOutput setSampleBufferDelegate:self queue:videoQueue];

then the program crashes.

When I instead try to create a second EAGLContext with sharing, as described in Apple's documentation, I only see a green or black screen.
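
For reference, this is roughly how I create the shared context (a sketch; _context is the main context the sample creates):

EAGLContext *backgroundContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2 sharegroup:[_context sharegroup]];

// On the background queue, before making any OpenGL ES calls:
[EAGLContext setCurrentContext:backgroundContext];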

How can I modify this sample application to run on a background thread?

  • What's the error that you get when your app crashes? – Mihai Fratu Feb 03 '12 at 14:59
  • By what means should the image get to the screen? Are you following the rules (ie, flush before, pass across, flush afterwards) for passing named GL resources between different contexts in the same share group? – Tommy Feb 03 '12 at 16:58

1 Answer


This was actually fairly interesting, once I tinkered with the sample. The problem here is with the CVOpenGLESTextureCacheCreateTextureFromImage() function. If you look at the console when you get the green texture, you'll see something like the following being logged:

Error at CVOpenGLESTextureCacheCreateTextureFromImage -6661

-6661, according to the headers (the only place I could find documentation on these new functions currently), is a kCVReturnInvalidArgument error. Something's obviously wrong with one of the arguments to this function.

It turns out that the CVImageBufferRef is the problem here. It appears to be deallocated or otherwise recycled while the block that handles the texture cache update is running.
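
For comparison, the naive background version, which hands the buffer straight to the block without locking or retaining it, looks something like this (a sketch of the failing pattern):

CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
dispatch_async(openGLESContextQueue, ^{
    // By the time this block runs, AVFoundation may already have recycled
    // pixelBuffer, so CVOpenGLESTextureCacheCreateTextureFromImage() fails
    // with kCVReturnInvalidArgument (-6661)
    // ... texture cache update here ...
});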

I tried a few ways of solving this, and ended up using a dispatch queue and dispatch semaphore like I describe in this answer, having the delegate still call back on the main thread, and within the delegate do something like the following:

- (void)captureOutput:(AVCaptureOutput *)captureOutput 
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer 
       fromConnection:(AVCaptureConnection *)connection
{
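    // If the previous frame is still being processed, drop this one instead of queueing up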
    if (dispatch_semaphore_wait(frameRenderingSemaphore, DISPATCH_TIME_NOW) != 0)
    {
        return;
    }

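    // Grab the pixel buffer on the main thread, lock its bytes, and retain it
    // so it stays valid until the asynchronous block has finished with it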
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    CFRetain(pixelBuffer);

    dispatch_async(openGLESContextQueue, ^{
        [EAGLContext setCurrentContext:_context];

        // Rest of your processing

        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
        CFRelease(pixelBuffer);

        dispatch_semaphore_signal(frameRenderingSemaphore);
    });
}

By grabbing the CVImageBufferRef on the main thread, locking the bytes it points to, and retaining it before handing it off to the asynchronous block, that seems to fix this error. A full project that shows this modification can be downloaded from here.
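
For completeness, the queue and semaphore used above are created once up front, something like this (a sketch, assuming instance variables with the names from the snippet; the queue label is arbitrary):

// One-time setup, e.g. in -viewDidLoad; the semaphore starts at 1 so at most
// one frame is in flight at a time
openGLESContextQueue = dispatch_queue_create("com.example.openGLESContextQueue", NULL);
frameRenderingSemaphore = dispatch_semaphore_create(1);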

I should say one thing here: this doesn't appear to gain you anything. If you look at the way that the GLCameraRipple sample is set up, the heaviest operation in the application, the calculation of the ripple effect, is already dispatched to a background queue. This is also using the new fast upload path for providing camera data to OpenGL ES, so that's not a bottleneck here when run on the main thread.

In my Instruments profiling on a dual-core iPhone 4S, I see no significant difference in rendering speed or CPU usage between the stock version of this sample application and my modified one that runs the frame upload on a background queue. Still, it was an interesting problem to diagnose.

  • Hello Brad, I have a question regarding this solution. Will this approach be useful in the case where I want to not only use the camera as a texture for an opengl object but also while using this same sample buffer to do some OpenCV operations? Would you recommend using the texture somehow instead? Ideally I think using OpenGL to do some image tracking would be the best but programming it on shaders is beyond my experience. Thanks – Pochi Aug 20 '12 at 09:38