I'm profiling a game I'm working on with the "OpenGL ES Driver" template in Instruments. The sampler shows nearly all of my time being spent in a function called gfxIODataGetNewSurface, with a call tree that looks like this:
gfxIODataGetNewSurface
  gliGetNewIOSurfaceES
    _ZL29native_window_begin_iosurfaceP23_EAGLNativeWindowObject
      usleep
        __semwait_signal
The game is only getting about 40 FPS on an iPhone 4 under what I don't believe is a heavy workload, which makes me think I'm doing something pathological in my OpenGL code.
Does anyone know what gliGetNewIOSurfaceES/gfxIODataGetNewSurface is doing, and what it indicates is happening in my app? Is it constantly creating new renderbuffers or something?
EDIT: New info...
I've discovered that with the following pixel shader:
varying vec2 texcoord;
uniform sampler2D sampler;
const vec4 color = vec4(...);
void main()
{
    gl_FragColor = color * texture2D(sampler, texcoord);
}
If I change the const 'color' to a #define, Renderer Utilization drops from 75% to 35% when drawing a full-screen (960x640) sprite. Really I want this color to be an interpolated 'varying' quantity computed in the vertex shader, but if making it a global constant hurts performance this badly, I can't imagine the 'varying' version would fare any better.
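For completeness, here's roughly what the two variants look like as separate fragment shaders. The COLOR value, the v_color name, and the mediump precision line are placeholders/assumptions for illustration, not exactly what's in my project:

// Variant A: color baked in with a #define (the fast case, ~35% Renderer Utilization)
precision mediump float;
#define COLOR vec4(1.0)   // placeholder value, not my real constant
varying vec2 texcoord;
uniform sampler2D sampler;
void main()
{
    gl_FragColor = COLOR * texture2D(sampler, texcoord);
}

// Variant B: what I'd actually like -- color interpolated from the vertex shader,
// which would declare "varying vec4 v_color;" and write it per-vertex
precision mediump float;
varying vec2 texcoord;
varying vec4 v_color;
uniform sampler2D sampler;
void main()
{
    gl_FragColor = v_color * texture2D(sampler, texcoord);
}

Variant B is the one I'd like to ship if the interpolation cost isn't prohibitive.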