This turned out to be fairly interesting once I tinkered with the sample. The problem here is with the CVOpenGLESTextureCacheCreateTextureFromImage()
function. If you look at the console when you get the green texture, you'll see something like the following being logged:
Error at CVOpenGLESTextureCacheCreateTextureFromImage -6661
-6661, according to the headers (the only place I could find documentation on these new functions currently), is a kCVReturnInvalidArgument
error. Something's obviously wrong with one of the arguments to this function.
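For context, the failing call has the following general shape (the argument values here are illustrative, not necessarily the sample's exact ones); -6661 is what comes back in the CVReturn result:

    CVOpenGLESTextureRef texture = NULL;
    CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                                textureCache,   // your CVOpenGLESTextureCacheRef
                                                                pixelBuffer,    // the CVImageBufferRef in question
                                                                NULL,           // texture attributes
                                                                GL_TEXTURE_2D,
                                                                GL_RGBA,
                                                                width,
                                                                height,
                                                                GL_BGRA,
                                                                GL_UNSIGNED_BYTE,
                                                                0,              // plane index
                                                                &texture);
    if (err != kCVReturnSuccess)
    {
        NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
    }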
It turns out that the CVImageBufferRef is the problem here. It appears to be deallocated or otherwise changed while the block that handles the texture cache update is running.
I tried a few ways of solving this, and ended up using a dispatch queue and dispatch semaphore like I describe in this answer, having the delegate still call back on the main thread, and within the delegate do something like the following:
    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        if (dispatch_semaphore_wait(frameRenderingSemaphore, DISPATCH_TIME_NOW) != 0)
        {
            return;
        }

        CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CVPixelBufferLockBaseAddress(pixelBuffer, 0);
        CFRetain(pixelBuffer);

        dispatch_async(openGLESContextQueue, ^{
            [EAGLContext setCurrentContext:_context];

            // Rest of your processing

            CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
            CFRelease(pixelBuffer);
            dispatch_semaphore_signal(frameRenderingSemaphore);
        });
    }
By grabbing the CVImageBufferRef on the main thread, locking the bytes it points to, and retaining it before handing it off to the asynchronous block, that seems to fix this error. A full project that shows this modification can be downloaded from here.
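For completeness, the `openGLESContextQueue` and `frameRenderingSemaphore` used above need to be created before capture starts. A minimal sketch (the queue label is arbitrary, and the ivar names are simply the ones the delegate method assumes):

    // In the view controller's setup code, before starting the AVCaptureSession:
    // a serial queue that owns all OpenGL ES work for this context, and a
    // semaphore with an initial count of 1 so at most one frame is in flight.
    openGLESContextQueue = dispatch_queue_create("com.example.openGLESContextQueue", DISPATCH_QUEUE_SERIAL);
    frameRenderingSemaphore = dispatch_semaphore_create(1);

The semaphore is what makes the early return in the delegate work: if the previous frame is still being processed on the queue, `dispatch_semaphore_wait()` with `DISPATCH_TIME_NOW` fails immediately and the new frame is simply dropped rather than queued up.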
I should say one thing here: this doesn't appear to gain you anything. If you look at the way that the GLCameraRipple sample is set up, the heaviest operation in the application, the calculation of the ripple effect, is already dispatched to a background queue. This is also using the new fast upload path for providing camera data to OpenGL ES, so that's not a bottleneck here when run on the main thread.
In my Instruments profiling on a dual-core iPhone 4S, I see no significant difference in rendering speed or CPU usage between the stock version of this sample application and my modified one that runs the frame upload on a background queue. Still, it was an interesting problem to diagnose.