
I tried Google and Stack Overflow, but I can't seem to find the official documentation for the functions that start with CVOpenGLESTexture. I can see they are from Core Video, and I know they were added in iOS 5, but searching the documentation doesn't turn up anything.

I am looking for information about the parameters: what they do, how to use them, etc., like in the other Apple frameworks.

So far all I can do is Command-click on a function to see its declaration, but this feels super awkward. Is there a way to add this information so it can be displayed in the Quick Help panel on the right in Xcode?

Thanks, and sorry if this is a stupid question.

PS: The Core Video reference guide doesn't seem to explain these either.

– Pochi

2 Answers


Unfortunately, there really isn't any documentation on these new functions. The best you're going to find right now is in the CVOpenGLESTextureCache.h header file, where you'll see a basic description of the function parameters:

/*!
    @function   CVOpenGLESTextureCacheCreate
    @abstract   Creates a new Texture Cache.
    @param      allocator The CFAllocatorRef to use for allocating the cache.  May be NULL.
    @param      cacheAttributes A CFDictionaryRef containing the attributes of the cache itself.   May be NULL.
    @param      eaglContext The OpenGLES 2.0 context into which the texture objects will be created.  OpenGLES 1.x contexts are not supported.
    @param      textureAttributes A CFDictionaryRef containing the attributes to be used for creating the CVOpenGLESTexture objects.  May be NULL.
    @param      cacheOut   The newly created texture cache will be placed here
    @result     Returns kCVReturnSuccess on success
*/
CV_EXPORT CVReturn CVOpenGLESTextureCacheCreate(
                    CFAllocatorRef allocator,
                    CFDictionaryRef cacheAttributes,
                    void *eaglContext,
                    CFDictionaryRef textureAttributes,
                    CVOpenGLESTextureCacheRef *cacheOut) __OSX_AVAILABLE_STARTING(__MAC_NA,__IPHONE_5_0);

The more difficult elements are the attributes dictionaries, which unfortunately you need to find examples of in order to use these functions properly. Apple has the GLCameraRipple and RosyWriter examples that show off how to use the fast texture upload path with BGRA and YUV input color formats. Apple also provided the ChromaKey example at WWDC (which may still be accessible along with the videos) that demonstrated how to use these texture caches to pull information from an OpenGL ES texture.
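
If you do want to pass something in for cacheAttributes, the only key I can find declared in the header is kCVOpenGLESTextureCacheMaximumTextureAgeKey, which appears to control how long (in seconds) an unused texture survives before the cache recycles it. A minimal sketch, assuming that reading of the header is right (passing NULL works fine if you don't need it):

// A sketch, not a definitive recipe: kCVOpenGLESTextureCacheMaximumTextureAgeKey
// is the one cache attribute declared in CVOpenGLESTextureCache.h, and the
// 1.0-second value here is just an illustration.
NSDictionary *cacheAttributes =
    [NSDictionary dictionaryWithObject:[NSNumber numberWithDouble:1.0]
                                forKey:(__bridge NSString *)kCVOpenGLESTextureCacheMaximumTextureAgeKey];

CVOpenGLESTextureCacheRef textureCache = NULL;
CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault,
                                            (__bridge CFDictionaryRef)cacheAttributes,
                                            (__bridge void *)context, // your OpenGL ES 2.0 EAGLContext
                                            NULL, // texture attributes
                                            &textureCache);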

I just got this fast texture uploading working in my GPUImage framework (the source code for which is available at that link), so I'll lay out what I was able to parse out of this. First, I create a texture cache using the following code:

CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault,
                                            NULL, // cache attributes
                                            (__bridge void *)[[GPUImageOpenGLESContext sharedImageProcessingOpenGLESContext] context],
                                            NULL, // texture attributes
                                            &coreVideoTextureCache);
if (err)
{
    NSAssert(NO, @"Error at CVOpenGLESTextureCacheCreate %d", err);
}

where the context referred to is an EAGLContext configured for OpenGL ES 2.0.
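
If you don't already have a context, creating one is standard EAGL boilerplate rather than anything specific to the texture caches; a minimal sketch:

#import <OpenGLES/EAGL.h>

// Create an OpenGL ES 2.0 context and make it current on this thread.
// The header is explicit that ES 1.x contexts are not supported here.
EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
[EAGLContext setCurrentContext:context];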

I use this to keep video frames from the iOS device camera in video memory, and I use the following code to do this:

CVPixelBufferLockBaseAddress(cameraFrame, 0);

CVOpenGLESTextureRef texture = NULL;
CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                            coreVideoTextureCache,
                                                            cameraFrame, // the CVImageBufferRef from the camera
                                                            NULL, // texture attributes
                                                            GL_TEXTURE_2D,
                                                            GL_RGBA, // internal format
                                                            bufferWidth,
                                                            bufferHeight,
                                                            GL_BGRA, // source pixel format
                                                            GL_UNSIGNED_BYTE,
                                                            0, // plane index
                                                            &texture);

if (!texture || err) {
    NSLog(@"CVOpenGLESTextureCacheCreateTextureFromImage failed (error: %d)", err);  
    return;
}

outputTexture = CVOpenGLESTextureGetName(texture);
glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// Do processing work on the texture data here

CVPixelBufferUnlockBaseAddress(cameraFrame, 0);

CVOpenGLESTextureCacheFlush(coreVideoTextureCache, 0);
CFRelease(texture);
outputTexture = 0;

This creates a new CVOpenGLESTextureRef, representing an OpenGL ES texture, from the texture cache. This texture is based on the CVImageBufferRef passed in by the camera. The texture name is then retrieved from the CVOpenGLESTextureRef, and appropriate parameters are set for it (which seemed to be necessary in my processing). Finally, I do my work on the texture and clean up when I'm done.

This fast upload process makes a real difference on iOS devices. On an iPhone 4S, it cut the time to upload and process a single 640x480 frame of video from 9.0 ms to 1.8 ms.

I've heard that this works in reverse, as well, which might allow for the replacement of glReadPixels() in certain situations, but I've yet to try this.
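
If you want to experiment with that reverse direction yourself, the rough idea (untested by me, so treat every detail here as an assumption) is to create an IOSurface-backed pixel buffer, pull a texture for it out of the cache, attach that texture to a framebuffer, and render into it; the rendered bytes should then be readable straight out of the pixel buffer without a glReadPixels() call:

// Untested sketch of the render-to-pixel-buffer direction. width, height,
// and coreVideoTextureCache are assumed to already exist.

// 1. Create a pixel buffer backed by an IOSurface, which is what lets the
//    cache share memory between Core Video and OpenGL ES.
NSDictionary *attributes = [NSDictionary dictionaryWithObject:[NSDictionary dictionary]
                                                       forKey:(__bridge NSString *)kCVPixelBufferIOSurfacePropertiesKey];
CVPixelBufferRef renderTarget = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32BGRA,
                    (__bridge CFDictionaryRef)attributes, &renderTarget);

// 2. Get a texture from the cache that aliases the pixel buffer's memory.
CVOpenGLESTextureRef renderTexture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCache,
                                             renderTarget, NULL, GL_TEXTURE_2D, GL_RGBA,
                                             width, height, GL_BGRA, GL_UNSIGNED_BYTE, 0,
                                             &renderTexture);

// 3. Attach the texture to a framebuffer object and render into it.
GLuint framebuffer;
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                       CVOpenGLESTextureGetName(renderTexture), 0);

// ... draw your scene here ...

// 4. Once rendering has finished, the results should be readable directly
//    from the pixel buffer's base address.
glFinish();
CVPixelBufferLockBaseAddress(renderTarget, 0);
// Process CVPixelBufferGetBaseAddress(renderTarget) here.
CVPixelBufferUnlockBaseAddress(renderTarget, 0);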

– Brad Larson
  • It is indeed a shame that such an efficient image-processing path is not better documented. But thanks to people like you, less experienced developers can try to use it as well. – Pochi Mar 06 '12 at 01:32
  • Were you able to get texture caches working for the photo preset? I could not find any info anywhere on `CFDictionaryRef cacheAttributes` for `CVOpenGLESTextureCacheCreate` – Dex Apr 04 '12 at 08:05
  • @Dex - Yes, this does work for the photo preset with the same settings as I've used for video frames. See my above-linked framework for the code with the options I've used. One caution is that on the iPhone 4, the camera can capture larger frames than the maximum texture size on that device. These photos can't be processed directly as textures, so some sort of tiling will be needed to deal with them. I'm working on that. – Brad Larson Apr 04 '12 at 13:02
  • @BradLarson Do you know if it's possible to persist `outputTexture`? I'm trying to cache it temporarily to use in another method without having to do any extra copies to an extra texture. Not releasing it is a start, but doesn't seem to work still. – Dex Jul 27 '12 at 22:38
  • @Dex - You'll need to copy the data within it somewhere if you want it to persist beyond the current frame of video, because it will be overwritten when the next camera frame comes in. One fast way to do this would be to render a passthrough shader on a quad to a texture-backed FBO. – Brad Larson Jul 28 '12 at 01:01
  • @BradLarson I'm using a mutex to make sure that no new frames are read into the texture cache. But there are still some issues. I can't tell if something internally is clearing out the cache or if its a scope issue or what. Copying to a new texture would definitely work. – Dex Jul 28 '12 at 01:10

Apple finally posted the documentation a little over a week ago.

– pfleiner
  • That's not documentation. That's someone at Apple copy/pasting the method names, and adding spaces in between the camel-case words. The header source-code comments are actually more useful than this, IME - they have more detail! – Adam Jan 05 '14 at 22:52