I'm rendering an OpenGL scene to a texture and trying to save the rendered result to a UIImage, which then gets saved to the camera roll.
I'm using Apple's method to create a UIImage from this texture. This works fine, but it eats a lot of memory while executing.
While my app is running and rendering, it uses about 1 MB of memory. When the glReadPixels call kicks in, memory usage jumps to 32 MB and I get a memory warning. Once glReadPixels finishes executing, memory falls back to 1 MB.
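For context, a rough back-of-envelope of the buffer sizes involved (the render-target dimensions below are an assumption for illustration, not my actual values): glReadPixels fills a raw RGBA8888 buffer of width × height × 4 bytes, and Apple's sample code then makes additional copies (a CGDataProvider buffer and the final UIImage backing store), so several copies can briefly coexist:

```python
def buffer_bytes(width, height, bytes_per_pixel=4):
    """Size of the raw RGBA8888 buffer that glReadPixels must fill."""
    return width * height * bytes_per_pixel

# Hypothetical retina-sized render target (1536 x 2048 is an assumption).
raw = buffer_bytes(1536, 2048)
print(raw / (1024 * 1024))  # 12.0 -> ~12 MiB per copy

# With 2-3 transient copies alive at once during the
# glReadPixels -> CGImage -> UIImage conversion, a spike in the
# tens of megabytes seems plausible.
```

This is only a sketch of why the spike might be that large, not a measurement of what my code actually allocates.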
Are there better ways to read the pixels? I'm trying to maintain compatibility with iOS versions prior to iOS 5, so I'm currently not using the texture cache method mentioned here.