
I am currently attempting to draw an image in OpenGL using the YUV420 (bi-planar) format. I receive raw data and am attempting to parse it into a CVPixelBuffer, which I then pass to CVOpenGLESTextureCacheCreateTextureFromImage. While I receive no errors when parsing into the CVPixelBuffer, I get an error (-6683, kCVReturnPixelBufferNotOpenGLCompatible) when passing the buffer to CVOpenGLESTextureCacheCreateTextureFromImage. I'm trying my best to follow Apple's GLCameraRipple sample code, except again, I'm using raw image data instead of data from the camera.

Hopefully someone can explain what I'm missing here; I assume it's a missing attribute...

FYI, plane 0 is the Y plane and plane 1 is the UV plane, where the UV plane is half the width and half the height of the Y plane.
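For reference, for a width × height frame in this format the plane sizes work out as follows (ignoring any per-row padding/stride):

// Plane 0 (Y): one byte per pixel.
size_t ySize  = width * height;
// Plane 1 (UV): interleaved U and V samples at half resolution in each
// dimension, i.e. (width / 2) * (height / 2) byte pairs.
size_t uvSize = (width / 2) * (height / 2) * 2;
// Total frame size: width * height * 3 / 2 bytes.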

size_t numPlanes = image->GetNumPlanes();
size_t planeWidth[numPlanes];
size_t planeHeight[numPlanes];
size_t scanWidth[numPlanes];
void *planeIndex[numPlanes];
for (size_t i = 0; i < numPlanes; i++) {
    // Plane 0 (Y) is full size; plane 1 (UV) is half the width and height.
    planeWidth[i]  = (i == 0) ? image->GetWidth()  : image->GetWidth()  / 2;
    planeHeight[i] = (i == 0) ? image->GetHeight() : image->GetHeight() / 2;
    scanWidth[i]   = image->GetScanWidth(i);
    planeIndex[i]  = image->GetPlanePointer(i);
}

CVPixelBufferRef pixelBuffer;
CFDictionaryRef empty;
CFMutableDictionaryRef attrs;
// An empty dictionary as the value for kCVPixelBufferIOSurfacePropertiesKey
// asks CoreVideo to back the pixel buffer with an IOSurface.
empty = CFDictionaryCreate(kCFAllocatorDefault,
                           NULL,
                           NULL,
                           0,
                           &kCFTypeDictionaryKeyCallBacks,
                           &kCFTypeDictionaryValueCallBacks);

attrs = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                  1,
                                  &kCFTypeDictionaryKeyCallBacks,
                                  &kCFTypeDictionaryValueCallBacks);

CFDictionarySetValue(attrs, kCVPixelBufferIOSurfacePropertiesKey, empty);

CVReturn cvError = CVPixelBufferCreateWithPlanarBytes(kCFAllocatorDefault,
                                                      image->GetWidth(),
                                                      image->GetHeight(),
                                                      kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
                                                      nil,
                                                      nil,
                                                      numPlanes,
                                                      planeIndex,
                                                      planeWidth,
                                                      planeHeight,
                                                      scanWidth,
                                                      nil, nil, attrs, &pixelBuffer);
if(cvError) NSLog(@"Error at CVPixelBufferCreateWithPlanarBytes:  %d", cvError);

CVReturn err;
size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);

if (!_videoTextureCache)
{
    NSLog(@"No video texture cache");
    return;
}

if (_bModel == nil ||
    width != _textureWidth ||
    height != _textureHeight)
{
    _textureWidth = width;
    _textureHeight = height;

    _bModel = [[BufferModel alloc] initWithScreenWidth:_screenWidth
                                          screenHeight:_screenHeight
                                            meshFactor:_meshFactor
                                          textureWidth:_textureWidth
                                         textureHeight:_textureHeight];

    [self setupBuffers];
}

[self cleanUpTextures];

// CVOpenGLESTextureCacheCreateTextureFromImage will create GLES texture
// optimally from CVImageBufferRef.

// Y-plane
glActiveTexture(GL_TEXTURE0);
err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                   _videoTextureCache,
                                                   pixelBuffer,
                                                   NULL,
                                                   GL_TEXTURE_2D,
                                                   GL_RED_EXT,
                                                   _textureWidth,
                                                   _textureHeight,
                                                   GL_RED_EXT,
                                                   GL_UNSIGNED_BYTE,
                                                   0,
                                                   &_lumaTexture);
if (err)
{
    NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
}

Thank you to anyone able to offer assistance. I'm aware there is a similar (though not identical) question, but it's quite old and never received any responses, so I'm hoping for more luck with my situation.

Doc

3 Answers


The iosurface property is null in the CVPixelBuffer you've created.

Created manually:

<CVPixelBuffer 0x1fd52790 width=1280 height=720 pixelFormat=420v iosurface=0x0 planes=2>

Created by CMSampleBufferGetImageBuffer:

<CVPixelBuffer 0x1fd521e0 width=1280 height=720 pixelFormat=420f iosurface=0x21621c54 planes=2>
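(These descriptions are simply what you get by logging the buffer; a CVPixelBufferRef is a CF object, so something like the following prints it:)

NSLog(@"%@", pixelBuffer);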

To my knowledge there is no solution.

junglecat
  • Thanks for the reply! Is there any way to work around the issue? Or do I just have to scrap this whole idea as impossible? – Doc Oct 01 '12 at 13:57
  • Interestingly, if I use CVPixelBufferCreate, with the same allocator, width and height, format, attributes, and pixelbuffer, I do not have the same issue - the IOSurface is set properly. Unfortunately, doing this doesn't set the image data, and I can't seem to find anything in the API to let me go in and set it after creating the CVPixelBuffer... – Doc Oct 01 '12 at 15:19
  • Check out my topic; I think it's a unique combo of iPhone 4 and iOS 6.0. http://stackoverflow.com/questions/12675655/cvopenglestexturecachecreatetexturefromimage-fails-to-create-iosurface – polyclick Oct 01 '12 at 15:28
  • I've been seeking resolution to this problem for a year... :-( Also it is not related to any specific hardware revision or OS version. – junglecat Oct 01 '12 at 16:13
  • Guess I'm gonna have to go back to the drawing board then. Was really hoping this would make for a nice solution to drawing YUV images via OpenGL w/o having to convert to RGB first =/ – Doc Oct 01 '12 at 17:36
  • YUV to RGB on the CPU is too expensive for iOS devices. I think we'll have to resort to straight up OpenGL using Shaders. – junglecat Oct 02 '12 at 05:35
  • How did you get the formatted output for the pixel buffer? – Robert Aug 11 '14 at 12:38
  • https://developer.apple.com/library/archive/qa/qa1781/_index.html: you need to set `kCVPixelBufferIOSurfacePropertiesKey` when creating the pixel buffer. – fengxing Nov 05 '18 at 11:03

Use CVPixelBufferCreate if you are going to use the CVPixelBufferRef with OpenGL. It creates an IOSurface for you, unlike the WithBytes alternatives. The downside is that you can't reuse your existing buffers; you'll have to copy the data from your existing buffers into the newly allocated ones.

// set pixel buffer attributes so we get an iosurface
NSDictionary *pixelBufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
                                       [NSDictionary dictionary], kCVPixelBufferIOSurfacePropertiesKey,
                                       nil];

// create planar pixel buffer
CVPixelBufferRef pixelBuffer = nil;
CVPixelBufferCreate(kCFAllocatorDefault, bufferYUV.width, bufferYUV.height, kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, (CFDictionaryRef)pixelBufferAttributes, &pixelBuffer);

// lock pixel buffer
CVPixelBufferLockBaseAddress(pixelBuffer, 0);

// get image details
size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);

// get plane addresses
unsigned char *baseAddressY  = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
unsigned char *baseAddressUV = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);

//TODO: copy your data buffers to the newly allocated memory locations
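// A sketch of this copy, assuming hypothetical source pointers and strides
// (srcY, srcUV, srcStrideY, srcStrideUV) for wherever your raw frame lives.
// Copy row by row, because the stride CoreVideo allocated for each plane
// may differ from the stride of your source data.
size_t dstStrideY  = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
size_t dstStrideUV = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
for (size_t row = 0; row < height; row++)
    memcpy(baseAddressY + row * dstStrideY, srcY + row * srcStrideY, width);
// The UV plane has height / 2 rows of width bytes (interleaved U and V).
for (size_t row = 0; row < height / 2; row++)
    memcpy(baseAddressUV + row * dstStrideUV, srcUV + row * srcStrideUV, width);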

// unlock pixel buffer address
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

// initialize buffers if not already initialized (see GLCameraRipple example)
if (!_buffersInitialized)
{
    [self initializeBuffersWithTextureWidth:width textureHeight:height];
}

// always clean up last textures
CVReturn err;
[self cleanUpTextures];

// Y-plane
glActiveTexture(GL_TEXTURE0);
err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _videoTextureCache, pixelBuffer, NULL, GL_TEXTURE_2D, GL_RED_EXT, width, height, GL_RED_EXT, GL_UNSIGNED_BYTE, 0, &_lumaTexture);
if (err)
{
    NSLog(@"Could not create Y texture from image. %d", err);
}

glBindTexture(CVOpenGLESTextureGetTarget(_lumaTexture), CVOpenGLESTextureGetName(_lumaTexture));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// UV-plane
glActiveTexture(GL_TEXTURE1);
err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _videoTextureCache, pixelBuffer, NULL, GL_TEXTURE_2D, GL_RG_EXT, width / 2, height / 2, GL_RG_EXT, GL_UNSIGNED_BYTE, 1, &_chromaTexture);
if (err)
{
    NSLog(@"Could not create UV texture from image. %d", err);
}

glBindTexture(CVOpenGLESTextureGetTarget(_chromaTexture), CVOpenGLESTextureGetName(_chromaTexture));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
Anton
  • I'm new to video processing and trying do something similar to your answer (following the CGCamerRipple example). But I'm not sure how to copy the data from my existing buffers into the newly allocated buffers? For instance.. how do you know the newly allocated buffers are large enough? Also, I see you use a baseAddressUV buffer.. how does that correlate to the U plane and the V plane buffers? – dchappelle Feb 12 '13 at 22:01
  • You can use memcpy to copy the data into the new buffers - just make sure you take stride into account! CVPixelBufferCreate ensures that the allocated buffers are large enough if you provided the correct width and height values, but again, make sure you pay attention to the stride of your input planes and the stride of the planes created by CVPixelBufferCreate. The UV plane should consist of interleaved UV data (UVUVUVUV...). – Anton Feb 12 '13 at 22:59
  • If your input has U and V data on separate planes, you must interleave them before using the CVPixelBuffer (a minimal sketch follows after these comments). See http://stackoverflow.com/questions/14567786/fastest-de-interleave-operation-in-c if you need a fast interleave/deinterleave algorithm. – Anton Feb 12 '13 at 23:00
  • I believe you can also specify the stride of the planes allocated by CVPixelBufferCreate by using kCVPixelBufferBytesPerRowAlignmentKey in the pixelBufferAttributes dictionary. – Anton Feb 12 '13 at 23:02
  • Can't you just use kCVPixelFormatType_420YpCbCr8Planar if you have U and V data on separate planes? – ninjudd Apr 05 '15 at 18:23
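Following up on the interleaving point in the comments above, a naive sketch (assuming hypothetical separate srcU and srcV planes of (width / 2) × (height / 2) bytes each, with dstUV pointing at the pixel buffer's UV plane, and ignoring stride):

// Interleave planar U and V into the bi-planar UV layout (UVUVUV...).
size_t chromaCount = (width / 2) * (height / 2);
for (size_t i = 0; i < chromaCount; i++)
{
    dstUV[2 * i]     = srcU[i]; // U sample
    dstUV[2 * i + 1] = srcV[i]; // V sample
}

See the linked question above for a faster, vectorized approach.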

I haven't tried the following approach with YUV, but it works for the RGB case:

https://developer.apple.com/library/ios/qa/qa1781/_index.html

Add __bridge before CFDictionaryRef if ARC is enabled.
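For example, the CVPixelBufferCreate call from the answer above would become something like:

// With ARC, an NSDictionary * must be bridged to CFDictionaryRef.
CVPixelBufferCreate(kCFAllocatorDefault,
                    bufferYUV.width,
                    bufferYUV.height,
                    kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
                    (__bridge CFDictionaryRef)pixelBufferAttributes,
                    &pixelBuffer);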

willSapgreen