
How can I rewrite Apple's GLCameraRipple example so that it doesn't require iOS 5.0?

I need to have it run on iOS 4.x, so I cannot use CVOpenGLESTextureCacheCreateTextureFromImage. What should I do?

As a follow-on, I'm using the code below to provide YUV data rather than RGB, but the picture isn't right: the screen is green, as if the UV plane isn't being applied.

CVPixelBufferLockBaseAddress(cameraFrame, 0);
int bufferHeight = CVPixelBufferGetHeight(cameraFrame);
int bufferWidth = CVPixelBufferGetWidth(cameraFrame);

// Create a new texture from the camera frame data, display that using the shaders
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, &_lumaTexture);
glBindTexture(GL_TEXTURE_2D, _lumaTexture);

glUniform1i(UNIFORM[Y], 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// This is necessary for non-power-of-two textures
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, bufferWidth, bufferHeight, 0, GL_LUMINANCE, 
             GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 0));

glActiveTexture(GL_TEXTURE1);
glGenTextures(1, &_chromaTexture);
glBindTexture(GL_TEXTURE_2D, _chromaTexture);
glUniform1i(UNIFORM[UV], 1);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// This is necessary for non-power-of-two textures
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// Upload the CbCr plane as a two-channel (luminance + alpha) texture
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA, bufferWidth/2, bufferHeight/2, 0, GL_LUMINANCE_ALPHA, 
             GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 1));

[self drawFrame];

glDeleteTextures(1, &_lumaTexture);
glDeleteTextures(1, &_chromaTexture);

CVPixelBufferUnlockBaseAddress(cameraFrame, 0);

How can I fix this?

user1278982
  • I wonder why you want to support iOS 4? Are you supporting the 3G model? – Nick Weaver Mar 19 '12 at 16:13
  • As a note, GLCameraRipple was using OpenGL ES 2.0, so I've rewritten your question to make it clear that you're asking about the iOS 5.0-specific fast texture upload capabilities. – Brad Larson Mar 19 '12 at 17:53

2 Answers


If you switch the pixel format from kCVPixelFormatType_420YpCbCr8BiPlanarFullRange to kCVPixelFormatType_32BGRA (at line 315 of RippleViewController) then captureOutput:didOutputSampleBuffer:fromConnection: will receive a sample buffer in which the image buffer can be uploaded straight to OpenGL via glTexImage2D (or glTexSubImage2D if you want to keep your texture sized as a power of two). That works because all iOS devices to date support the GL_APPLE_texture_format_BGRA8888 extension, allowing you to specify an otherwise non-standard format of GL_BGRA.
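
For reference, that pixel-format switch is just a change to the capture output's videoSettings dictionary. A minimal sketch (videoDataOutput here stands in for whatever the sample names its AVCaptureVideoDataOutput instance):

// Ask the capture output for 32-bit BGRA frames instead of bi-planar YUV,
// so the image buffer can be handed to glTexImage2D/glTexSubImage2D as-is.
// (Add a __bridge cast on the key if the project uses ARC.)
[videoDataOutput setVideoSettings:
    [NSDictionary dictionaryWithObject:
        [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]
                                forKey:(id)kCVPixelBufferPixelFormatTypeKey]];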

So you'd create a texture somewhere in advance with glGenTextures and replace line 235 with something like:

glBindTexture(GL_TEXTURE_2D, myTexture);

CVPixelBufferLockBaseAddress(pixelBuffer, 0);
glTexSubImage2D(GL_TEXTURE_2D, 0,
      0, 0,
      CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer),
      GL_BGRA, GL_UNSIGNED_BYTE, 
      CVPixelBufferGetBaseAddress(pixelBuffer));

CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

You may want to check that CVPixelBufferGetBytesPerRow returns four times the result of CVPixelBufferGetWidth; the documentation doesn't make clear whether that's guaranteed (which, pragmatically, probably means it isn't). But as long as it's a multiple of four you can just supply CVPixelBufferGetBytesPerRow divided by four as your pretend width, given that you're uploading a sub-image anyway.
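
A minimal sketch of that check, reusing pixelBuffer from above (bytesPerRow will be a multiple of four in practice, since each BGRA pixel is four bytes):

size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);

// If the rows are padded, upload bytesPerRow / 4 pixels per row instead and
// let the texture coordinates skip the extra columns on the right-hand edge.
size_t uploadWidth = (bytesPerRow == width * 4) ? width : bytesPerRow / 4;

glTexSubImage2D(GL_TEXTURE_2D, 0,
      0, 0,
      (GLsizei)uploadWidth, (GLsizei)CVPixelBufferGetHeight(pixelBuffer),
      GL_BGRA, GL_UNSIGNED_BYTE,
      CVPixelBufferGetBaseAddress(pixelBuffer));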

EDIT: in response to the follow-on question posted below as a comment, if you want to stick with receiving frames and making them available to the GPU in YUV, the code becomes visually ugly, because what you get back is a structure pointing to the various channel components, but you'd want something like this:

// lock the base address, pull out the struct that'll show us where the Y
// and CbCr information is actually held
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
CVPlanarPixelBufferInfo_YCbCrBiPlanar *info =
      (CVPlanarPixelBufferInfo_YCbCrBiPlanar *)CVPixelBufferGetBaseAddress(pixelBuffer);

// okay, upload Y. You'll want to communicate this texture to the
// SamplerY uniform within the fragment shader.
glBindTexture(GL_TEXTURE_2D, yTexture);

uint8_t *yBaseAddress = (uint8_t *)info + EndianU32_BtoN(info->componentInfoY.offset);
uint32_t yRowBytes = EndianU32_BtoN(info->componentInfoY.rowBytes);

/* TODO: check that yRowBytes is equal to CVPixelBufferGetWidth(pixelBuffer);
   otherwise you'll need to shuffle memory a little */

glTexSubImage2D(GL_TEXTURE_2D, 0,
      0, 0,
      CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer),
      GL_LUMINANCE, GL_UNSIGNED_BYTE, 
      yBaseAddress);

// we'll also need to upload the CbCr part of the buffer, as a two-channel
// (ie, luminance + alpha) texture. This texture should be supplied to
// the shader for the SamplerUV uniform.
glBindTexture(GL_TEXTURE_2D, uvTexture);

uint8_t *uvBaseAddress = (uint8_t *)info + EndianU32_BtoN(info->componentInfoCbCr.offset);
uint32_t uvRowBytes = EndianU32_BtoN(info->componentInfoCbCr.rowBytes);

/* TODO: a check on uvRowBytes, as above */

glTexSubImage2D(GL_TEXTURE_2D, 0,
      0, 0,
      CVPixelBufferGetWidth(pixelBuffer)/2, CVPixelBufferGetHeight(pixelBuffer)/2,
      GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, 
      uvBaseAddress);

CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
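
The above assumes yTexture and uvTexture were created once, up front, since glTexSubImage2D only updates storage that already exists. A rough sketch of that one-time setup (the power-of-two sizes are placeholders for whatever covers your frame, and the uniform names follow the SamplerY / SamplerUV uniforms mentioned above):

// One-off setup: allocate power-of-two storage for the Y and CbCr textures
// (hence glTexSubImage2D above, with texture coordinates scaled to address
// only the used sub-rectangle) and point the shader's samplers at texture
// units 0 and 1. Per-frame, select the matching unit before binding.
glGenTextures(1, &yTexture);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, yTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 1024, 1024, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, NULL);

glGenTextures(1, &uvTexture);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, uvTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA, 512, 512, 0,
             GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, NULL);

// Tell SamplerY and SamplerUV which texture units to read from.
GLint samplerYUniform = glGetUniformLocation(program, "SamplerY");
GLint samplerUVUniform = glGetUniformLocation(program, "SamplerUV");
glUseProgram(program);
glUniform1i(samplerYUniform, 0);   // GL_TEXTURE0
glUniform1i(samplerUVUniform, 1);  // GL_TEXTURE1
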
Tommy
  • For the record, this turns out to be almost exactly the same answer as Brad's but half a minute later. I'm taking that as a compliment. I'm going to leave it here though, in case my line number references into the nominated Apple project are of any help. – Tommy Mar 19 '12 at 18:14
  • Thanks for your answer, but if I want to use kCVPixelFormatType_420YpCbCr8BiPlanarFullRange without using iOS 5.0-specific features, how should I do that? – user1278982 Mar 20 '12 at 08:27
  • Having poked into the shaders, it looks like Apple are already doing a YUV to RGB conversion in there, so you'd upload the Y and UV components as two separate textures (luminance and luminance+alpha respectively, I'd imagine). Give me a few minutes and I'll try to update my answer accordingly... – Tommy Mar 20 '12 at 20:12
  • `glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);` Isn't this code needed? What are the differences between your code and mine? – user1278982 Mar 21 '12 at 03:02
  • I'm using `glTexSubImage2D` to dump the new data into a subsection of a power-of-two texture (force of habit really; they certainly used to be faster but I haven't benchmarked it on a modern iOS device), though probably texture parameters should be set only when the texture is created, and I'm conveniently assuming the textures are created once and then merely updated whereas your code created them afresh every time. Other than that I don't see any significant differences, and your use of `CVPixelBufferGetBaseAddressOfPlane` is probably even stylistically preferable to my version. – Tommy Mar 21 '12 at 18:06

The iOS 5.0 fast texture upload capabilities can make for very fast uploading of camera frames and extraction of texture data, which is why Apple uses them in their latest sample code. For camera data, I've seen 640x480 frame upload times go from 9 ms to 1.8 ms using these iOS 5.0 texture caches on an iPhone 4S, and for movie capturing I've seen more than a fourfold improvement when switching to them.

That said, you still might want to provide a fallback for stragglers who have not yet updated to iOS 5.x. I do this in my open source image processing framework by using a runtime check for the texture upload capability:

+ (BOOL)supportsFastTextureUpload;
{
    // With a deployment target below iOS 5.0, CVOpenGLESTextureCacheCreate is
    // weak-linked, so the symbol resolves to NULL at runtime on iOS 4.x.
    return (CVOpenGLESTextureCacheCreate != NULL);
}

If this returns NO, I use the standard upload process that we have had since iOS 4.0:

CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
int bufferWidth = CVPixelBufferGetWidth(cameraFrame);
int bufferHeight = CVPixelBufferGetHeight(cameraFrame);

CVPixelBufferLockBaseAddress(cameraFrame, 0);

glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(cameraFrame));

// Do your OpenGL ES rendering here

CVPixelBufferUnlockBaseAddress(cameraFrame, 0);

GLCameraRipple has one quirk in its upload process: it uses YUV planar frames (split into Y and UV images) instead of a single BGRA image. I get good performance from my BGRA uploads, so I haven't seen the need to work with YUV data myself. You could either modify GLCameraRipple to use BGRA frames and the above code (simplifying its fragment shader along the way, as sketched below), or rework what I have above into YUV planar data uploads.
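
If you do take the BGRA route, the only shader change is to drop the YUV-to-RGB conversion and sample a single texture, since the BGRA8888 extension upload already reads back in normal RGBA order. A rough sketch of what that fragment shader could look like, written as a C string for glShaderSource (the varying and uniform names are placeholders, not necessarily the ones Apple's sample uses):

// Hypothetical single-texture fragment shader for the BGRA path. No per-pixel
// YUV -> RGB math is needed because the GL_APPLE_texture_format_BGRA8888
// upload samples as ordinary RGBA.
static const char *kBGRAFragmentShader =
    "varying highp vec2 textureCoordinate;\n"
    "uniform sampler2D videoTexture;\n"
    "void main()\n"
    "{\n"
    "    gl_FragColor = texture2D(videoTexture, textureCoordinate);\n"
    "}\n";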

Brad Larson
  • Comparing your answer to mine, frustratingly 27 seconds later, is it your understanding that image buffers will always be tightly packed, i.e. so that `CVPixelBufferGetBytesPerRow` is equal to `CVPixelBufferGetWidth` * 4 for this particular pixel format? If so can you provide a reference — I'd love to have an authoritative answer to that, that I can provide to others. Apart from that is there any reason you don't unlock the base address until after the ES rendering? Or, from the other point of view, is there any reason to care when as long as you do it? – Tommy Mar 19 '12 at 18:12
  • @Tommy - Those are pretty good questions. Honestly, the unlock could probably come earlier for the case where you use `glTexImage2D()` for the upload. For the texture caches, I found that I needed to lock around access to this texture data if the rendering was performed on a different thread than the AVFoundation camera frame callbacks. As for the tight packing, that was my understanding for video frames, but it does sound like image preview frames might be different: http://stackoverflow.com/questions/6540710/ios-cvimagebuffer-distorted-from-avcapturesessiondataoutput-with-avcapturesessio – Brad Larson Mar 19 '12 at 18:21