
I'm rendering an OpenGL scene to a texture and trying to save the rendered result to a UIImage, which then gets saved to the camera roll.

I'm using Apple's method to create a UIImage from this texture. This works fine, but it eats a lot of memory while the code executes.

While my app is running and rendering, it uses about 1 MB of memory. When the glReadPixels call kicks in, memory usage jumps to 32 MB and I get a memory warning. Once glReadPixels finishes executing, memory falls back to 1 MB.

Are there better ways to read the pixels? I'm trying to maintain compatibility with iOS versions prior to iOS 5, so I'm currently not using the texture cache method mentioned here.

  • It's probably not `glReadPixels()` that's the problem here, it's the fact that you have to allocate a buffer of sufficient size to hold your entire image. This will take width * height * 4 bytes, unless you go the texture cache route, in which case memory can be shared between your FBO texture and your UIImage's backing bytes. – Brad Larson Sep 28 '12 at 15:48
  • I'm implementing the texture cache method, but this gives me errors on iPhone 4 + iOS 6, which you can read about here: http://stackoverflow.com/questions/12675655/cvopenglestexturecachecreatetexturefromimage-fails-to-create-iosurface Furthermore, I'll probably stick with glReadPixels on older devices/iOS versions. – polyclick Oct 01 '12 at 15:25

0 Answers