The code I describe in the answer you link to covers the creation of a pixel buffer, the extraction of a matching texture from it, and the binding of that texture as the output of a framebuffer. You use that code once, to set up the framebuffer you will render your scene into.
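For reference, a condensed sketch of that setup looks roughly like the following. It assumes an EAGLContext named context and a scene size named currentFBOSize, which are placeholders here, and it omits error checking; see the linked answer for the full version:

// Create a texture cache tied to your OpenGL ES context
CVOpenGLESTextureCacheRef textureCache;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, context, NULL, &textureCache);

// The pixel buffer needs to be IOSurface-backed for the texture cache to work
CFDictionaryRef empty = CFDictionaryCreate(kCFAllocatorDefault, NULL, NULL, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFMutableDictionaryRef attrs = CFDictionaryCreateMutable(kCFAllocatorDefault, 1, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(attrs, kCVPixelBufferIOSurfacePropertiesKey, empty);

CVPixelBufferRef renderTarget;
CVPixelBufferCreate(kCFAllocatorDefault, (int)currentFBOSize.width, (int)currentFBOSize.height, kCVPixelFormatType_32BGRA, attrs, &renderTarget);
CFRelease(attrs);
CFRelease(empty);

// Pull a texture out of the pixel buffer via the texture cache
CVOpenGLESTextureRef renderTexture;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, renderTarget, NULL, GL_TEXTURE_2D, GL_RGBA, (int)currentFBOSize.width, (int)currentFBOSize.height, GL_BGRA, GL_UNSIGNED_BYTE, 0, &renderTexture);

glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// Attach that texture as the color output of the framebuffer you render your scene to
GLuint framebuffer;
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);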
Whenever you want to capture from that texture, you'll probably want to use glFinish() to block until all OpenGL ES rendering has completed, then use the code I describe there:
CVPixelBufferLockBaseAddress(renderTarget, 0);
_rawBytesForImage = (GLubyte *)CVPixelBufferGetBaseAddress(renderTarget);
// Do something with the bytes
CVPixelBufferUnlockBaseAddress(renderTarget, 0);
to extract the raw bytes for the texture containing the image of your scene.
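Putting those pieces together, a capture method might look something like this minimal sketch, where renderTarget and currentFBOSize come from the setup above and the method name is just illustrative:

- (void)captureSceneBytes
{
    // Block until all OpenGL ES rendering into the texture has finished
    glFinish();

    CVPixelBufferLockBaseAddress(renderTarget, 0);
    GLubyte *rawBytes = (GLubyte *)CVPixelBufferGetBaseAddress(renderTarget);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(renderTarget);

    // Do something with rawBytes here, remembering that each row is
    // bytesPerRow bytes long, which can be wider than width * 4 due to padding

    CVPixelBufferUnlockBaseAddress(renderTarget, 0);
}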
The internal byte ordering of the iOS textures is BGRA, so you'll want to use something like the following to create a CGImageRef from those bytes:
// It appears that the width of a texture must be padded out to a multiple of
// 8 pixels (32 bytes per row) if reading from it using a texture cache
NSUInteger paddedWidthOfImage = CVPixelBufferGetBytesPerRow(renderTarget) / 4.0;
NSUInteger paddedBytesForImage = paddedWidthOfImage * (int)currentFBOSize.height * 4;

// defaultRGBColorSpace is a CGColorSpaceRef, e.g. created once with CGColorSpaceCreateDeviceRGB()
CGDataProviderRef dataProvider = CGDataProviderCreateWithData((__bridge_retained void *)self, _rawBytesForImage, paddedBytesForImage, dataProviderUnlockCallback);
CGImageRef cgImageFromBytes = CGImageCreate((int)currentFBOSize.width, (int)currentFBOSize.height, 8, 32,
                                            CVPixelBufferGetBytesPerRow(renderTarget), defaultRGBColorSpace,
                                            kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst,
                                            dataProvider, NULL, NO, kCGRenderingIntentDefault);
In the above, I use a dataProviderUnlockCallback() function to handle the unlocking of the pixel buffer and safe resumption of rendering, but you can probably ignore that in your case and just pass in NULL for the parameter there.
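If you do want that cleanup instead of passing NULL, the release callback can look roughly like the following hypothetical sketch, where MyRenderer and its renderTarget property are stand-ins for whatever class owns the pixel buffer:

// Matches the CGDataProviderReleaseDataCallback signature
void dataProviderUnlockCallback(void *info, const void *data, size_t size)
{
    // info is the (__bridge_retained) self pointer passed to
    // CGDataProviderCreateWithData above; transferring it back lets ARC
    // balance that extra retain
    MyRenderer *renderer = (__bridge_transfer MyRenderer *)info;

    CVPixelBufferUnlockBaseAddress([renderer renderTarget], 0);
    // Resume or re-enable rendering here if you paused it while the
    // CGImage was using the bytes
}

From there, if what you ultimately need is a UIImage, [UIImage imageWithCGImage:cgImageFromBytes] will wrap the result.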