I need to write an OpenGL ES 2D renderer on iOS. It should draw primitives such as lines and polygons into a 2D image (it will be rendering a vector map). What is the best way to get an image out of the OpenGL context for this task? I mean, should I render these primitives into a texture and then get the image from it, or what? Also, it would be great if someone could point to examples or tutorials that look like what I need (2D GL rendering into an image). Thanks in advance!
1 Answer
If you need to render an OpenGL ES 2-D scene, then extract an image of that scene to use outside of OpenGL ES, you have two main options.
The first is to simply render your scene and use glReadPixels() to grab RGBA data for the scene and place it in a byte array, like in the following:
// totalBytesForImage is width * height * 4 bytes for an RGBA framebuffer
GLubyte *rawImagePixels = (GLubyte *)malloc(totalBytesForImage);
glReadPixels(0, 0, (int)currentFBOSize.width, (int)currentFBOSize.height, GL_RGBA, GL_UNSIGNED_BYTE, rawImagePixels);
// Do something with the image
free(rawImagePixels);
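Since the end goal here is an image of the rendered map, a rough sketch of wrapping those bytes in a UIImage via Core Graphics might look like the following (this assumes an RGBA framebuffer with premultiplied alpha and that currentFBOSize holds your FBO dimensions; those are assumptions, not something from the code above):
// Sketch: turn the RGBA bytes from glReadPixels() into a UIImage.
size_t width  = (size_t)currentFBOSize.width;
size_t height = (size_t)currentFBOSize.height;

CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, rawImagePixels,
                                                          width * height * 4, NULL);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef cgImage = CGImageCreate(width, height,
                                   8, 32, width * 4,
                                   colorSpace,
                                   kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast,
                                   provider, NULL, NO, kCGRenderingIntentDefault);
// OpenGL's origin is bottom-left, so this image is vertically flipped relative
// to UIKit; redraw it into a flipped CGContext if that matters for your use.
UIImage *image = [UIImage imageWithCGImage:cgImage];

CGImageRelease(cgImage);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);
// Keep rawImagePixels alive until you're done with the image (or copy the
// pixels first), since the data provider above does not own or free them.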
The second, and much faster, way of doing this is to render your scene to a texture-backed framebuffer object (FBO), where the texture has been provided by iOS 5.0's texture caches. I describe this approach in this answer, although I don't show the code for raw data access there.
You do the following to set up the texture cache and bind the FBO texture:
CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)[[GPUImageOpenGLESContext sharedImageProcessingOpenGLESContext] context], NULL, &rawDataTextureCache);
if (err)
{
    NSAssert(NO, @"Error at CVOpenGLESTextureCacheCreate %d", err);
}
// Code originally sourced from http://allmybrain.com/2011/12/08/rendering-to-a-texture-with-ios-5-texture-cache-api/
CFDictionaryRef empty; // empty value for attr value.
CFMutableDictionaryRef attrs;
empty = CFDictionaryCreate(kCFAllocatorDefault, // our empty IOSurface properties dictionary
                           NULL,
                           NULL,
                           0,
                           &kCFTypeDictionaryKeyCallBacks,
                           &kCFTypeDictionaryValueCallBacks);
attrs = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                  1,
                                  &kCFTypeDictionaryKeyCallBacks,
                                  &kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(attrs,
                     kCVPixelBufferIOSurfacePropertiesKey,
                     empty);

//CVPixelBufferPoolCreatePixelBuffer (NULL, [assetWriterPixelBufferInput pixelBufferPool], &renderTarget);
CVPixelBufferCreate(kCFAllocatorDefault,
                    (int)imageSize.width,
                    (int)imageSize.height,
                    kCVPixelFormatType_32BGRA,
                    attrs,
                    &renderTarget);

CVOpenGLESTextureRef renderTexture;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                             rawDataTextureCache,
                                             renderTarget,
                                             NULL, // texture attributes
                                             GL_TEXTURE_2D,
                                             GL_RGBA, // opengl format
                                             (int)imageSize.width,
                                             (int)imageSize.height,
                                             GL_BGRA, // native iOS format
                                             GL_UNSIGNED_BYTE,
                                             0,
                                             &renderTexture);

CFRelease(attrs);
CFRelease(empty);

glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);
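One thing the snippet above takes for granted is that a framebuffer object has already been generated and bound by the time glFramebufferTexture2D() is called. If you're not starting from existing code like GPUImage, a minimal sketch of that missing piece (the framebuffer name is just a placeholder) would be:
// Sketch: the FBO that the cache-backed texture gets attached to.
// Create and bind this before the glFramebufferTexture2D() call above.
GLuint framebuffer;
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

// ... glBindTexture() / glFramebufferTexture2D() as shown above ...

GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE)
{
    NSLog(@"Incomplete framebuffer: %x", status);
}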
Once the scene has been rendered into that FBO, you can read directly from the bytes that back this texture (in BGRA format, not the RGBA of glReadPixels()) using something like:
CVPixelBufferLockBaseAddress(renderTarget, 0);
_rawBytesForImage = (GLubyte *)CVPixelBufferGetBaseAddress(renderTarget);
// Do something with the bytes
CVPixelBufferUnlockBaseAddress(renderTarget, 0);
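A practical detail that isn't shown above: the pixel buffer's rows can be padded, so it's safer to walk them using CVPixelBufferGetBytesPerRow() than to assume a stride of width * 4. A sketch of a padding-aware copy (and of waiting for the GPU to finish before reading, which you'd normally want) might look like:
// Sketch: copy BGRA pixels out of the buffer, respecting any row padding.
glFinish(); // make sure the GPU has finished rendering into the texture

CVPixelBufferLockBaseAddress(renderTarget, kCVPixelBufferLock_ReadOnly);

size_t width       = CVPixelBufferGetWidth(renderTarget);
size_t height      = CVPixelBufferGetHeight(renderTarget);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(renderTarget); // may exceed width * 4
GLubyte *src = (GLubyte *)CVPixelBufferGetBaseAddress(renderTarget);

GLubyte *packedPixels = (GLubyte *)malloc(width * height * 4);
for (size_t row = 0; row < height; row++)
{
    memcpy(packedPixels + row * width * 4, src + row * bytesPerRow, width * 4);
}

CVPixelBufferUnlockBaseAddress(renderTarget, kCVPixelBufferLock_ReadOnly);
// ... use packedPixels (BGRA order), then free() it ...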
However, if you only want to reuse your image within OpenGL ES, you just need to render your scene to a texture-backed FBO and then use that texture in your second rendering pass.
I show an example of rendering to a texture and then performing some processing on it in the CubeExample sample application within my open source GPUImage framework, if you want to see this in action.
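If you only need that in-GL reuse and never touch the pixels on the CPU, the texture cache machinery above isn't required either; an ordinary texture-backed FBO is enough. A minimal sketch, assuming an ES 2.0 context is current and imageSize is your render size:
// Sketch: a plain texture-backed FBO for render-to-texture within OpenGL ES.
GLuint offscreenTexture, offscreenFramebuffer;

glGenTextures(1, &offscreenTexture);
glBindTexture(GL_TEXTURE_2D, offscreenTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)imageSize.width, (int)imageSize.height,
             0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glGenFramebuffers(1, &offscreenFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, offscreenFramebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                       offscreenTexture, 0);

// Draw the vector map primitives here, then bind your onscreen framebuffer
// and sample offscreenTexture in the second rendering pass.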
- @medvedNick - `glReadPixels()` works on all OS versions, but the texture caches were only first introduced in iOS 5.0. They are much, much faster, but are 5.0-only. You can do a runtime check, though, and fall back to the slower `glReadPixels()` on older OS versions. Again, check my GPUImage code, because I do this in there. – Brad Larson May 04 '12 at 20:55
- Texture caches are 5.0 and above, but aren't they fast only on certain devices? e.g. fast on 4S/iPad 2 and equivalent to glReadPixels/glTexImage2D everywhere else – Rhythmic Fistman May 05 '12 at 06:56
- @RhythmicFistman - They are faster than `glReadPixels()` on every device running 5.0 in my benchmarks, if used properly. You avoid an expensive color conversion and directly access the raw pixel data in memory for a cached texture. I've seen significant benefits on iPad 1 and iPhone 4 (I don't have a 3G S running 5.0, but I've heard good things there, too). – Brad Larson May 05 '12 at 13:47
- No accidental colour conversion is definitely an API bonus. I tried texture caches as a replacement for glTexImage2D in a video player, and confusingly the results were indistinguishable from the traditional implementation; not sure what went wrong yet. However, I'm confused - if you can directly access the memory of a texture, surely you have hardware support? In any case I plan to revisit them in v1.1; there must be performance gains in there somewhere. – Rhythmic Fistman May 07 '12 at 15:20
- @RhythmicFistman - For uploads, you usually specify BGRA as the color format anyway, so you wouldn't have noticed an image color difference between the two approaches. With an AVAssetReader, I still noticed a significant speedup with the caches, although not as great as when dealing with the live camera feeds. On the iOS devices, the GPU shares memory with the system, and I believe Apple is just accessing this using the private (on iOS) IOSurface framework. All 5.0-supporting devices work with this. – Brad Larson May 07 '12 at 18:47
- In a 2011 WWDC video in which caches were used on an AVCaptureSession video feed (session 419?) I thought they said texture caches only worked on the iPad 2, but if they work everywhere then that's great news! I'll dust that code off and re-measure. Now that there are these 60fps movs floating around I need all the performance I can get. Thanks! p.s. I'm using YUV. – Rhythmic Fistman May 08 '12 at 05:19
- @BradLarson I was wrong, it was WWDC 2011 session 414, ~28:00, and at the time, to use YUV data with texture caches you had to use the iPad 2-exclusive one- and two-channel red and red-green extensions. That seems to have changed! Measuring time. – Rhythmic Fistman May 08 '12 at 09:32
- Profiling gives me confusing results: while my texture upload times go down, my h264 decode times go up! Using texture age to control cache size helps, but then memory profiling is difficult as texture memory doesn't seem to be charged to any Instruments-visible process. Could be worth a TSI. – Rhythmic Fistman May 12 '12 at 07:24
- @BradLarson Thanks for giving some hints. But when I try your approach above, the screen is black, while if I save the renderTarget as a picture, the picture shows correctly... which is strange. Can you help me? – CPT Jul 15 '14 at 06:40
- @BradLarson There is one difference between mine and yours: I replace `[[GPUImageOpenGLESContext sharedImageProcessingOpenGLESContext] context]` with `[EAGLContext currentContext]`, because there is no GPUImageOpenGLESContext in my project. I have also referred to your `GPUImage` project, but I want to implement this with my own native code. So any help is welcome. – CPT Jul 15 '14 at 06:44
- @BradLarson Is there a way to get RGBA pixels using texture caches? I need a fast way to downsample, crop, and pass RGBA pixels from the camera to a computer vision library. – grisevg Nov 22 '16 at 18:25
- @grisevg - An easy way to do this would be to use a render pass with a color-swizzling shader. I've done this before to extract RGBA values and it's fast. – Brad Larson Nov 22 '16 at 19:32
- @BradLarson Makes sense, the OpenGL/CoreVideo side wouldn't care about swizzled channels. Do you know if it's possible to convert to an 8-bit monochrome texture? Or does this have to be done on the CPU? – grisevg Nov 28 '16 at 17:22
- @grisevg - It's possible, but you need a custom fragment and vertex shader to pack luminance from four adjacent pixels into the RGBA channels of an output pixel. It would most likely only work well on multiple-of-four image widths. – Brad Larson Nov 28 '16 at 17:29
- @BradLarson Found a way: you can use `GL_R8_EXT` and `GL_RED_EXT` to store and get only a single channel. Cheers for the help. – grisevg Dec 05 '16 at 19:40