
I'm on iOS 6 (and 7 too, if it makes any difference) with GL ES 2.0. The idea is for a CAEAGLLayer to have a dynamic chain of shader-based filters that processes its contents property and displays the final result. Filters can be added or removed at any point in the chain.

So far I've come up with an implementation, but I'm wondering if there are better ways to go about it. My implementation roughly works like this:

  1. A base filter class from which concrete filters inherit, each creating a shader program (vertex / fragment combo) for whatever filter / imaging operation it implements.
  2. A CAEAGLLayer subclass which implements the filter chain and to which filters are added. The high-level processing algorithm is:

     // 1 - Assume that whenever the layer's contents property is set to an image, a copy of the image is stored in a sourceImage property.
     // 2 - Assume that changing the contents property or adding / removing a filter triggers this algorithm.
     // 3 - Assume the whole filter chain basically processes a quad with position and texture coordinates through a VBO.
     // 4 - Assume all shader programs (by shader program I mean a vertex and fragment shader pair linked into a single program) have access to texture unit 0.
     // 5 - Assume there are P shader programs in the chain.
    
     load sourceImage into a texture object bound to GL_TEXTURE_2D on texture unit GL_TEXTURE0
     attach the bound texture object to GL_FRAMEBUFFER's GL_COLOR_ATTACHMENT0 (so we are doing render-to-texture, and the result is accessible to fragment shaders)
     for p = program identifier 0 up to P - 2:
        glUseProgram(p)
        glDrawArrays()
    
     attach GL_RENDERBUFFER to GL_FRAMEBUFFER GL_COLOR_ATTACHMENT0 (GL_RENDERBUFFER in turn has its storage set to the layer itself);
     p = program identifier P - 1 (last program in the chain)
     glUseProgram(p)
     glDrawArrays()
    
     present GL_RENDERBUFFER onscreen
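
The pass schedule above can be modeled as plain data: the first P - 1 programs render into the texture attachment, and only the last renders into the renderbuffer backed by the layer. A minimal sketch of that bookkeeping (the `Pass` struct and function names are hypothetical, not GL API):

```c
#include <assert.h>

/* Hypothetical model of the pass schedule: which program renders into
 * which color attachment. This is bookkeeping, not actual GL calls. */
enum Target { TARGET_TEXTURE, TARGET_RENDERBUFFER };

struct Pass { unsigned program; enum Target target; };

/* Fill `out` with one pass per program: programs 0..P-2 render into the
 * texture attached to GL_COLOR_ATTACHMENT0; the last program renders into
 * the renderbuffer whose storage is the layer itself. */
static int schedule_passes(unsigned num_programs, struct Pass *out) {
    for (unsigned p = 0; p < num_programs; ++p) {
        out[p].program = p;
        out[p].target = (p == num_programs - 1) ? TARGET_RENDERBUFFER
                                                : TARGET_TEXTURE;
    }
    return (int)num_programs;
}
```

At render time you would walk this schedule, calling glUseProgram and glDrawArrays per pass and swapping the framebuffer attachment only before the final pass.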
    

This approach seems to work so far, but there are a number of things I'm wondering about:

Best way to implement adding / removing of filters:

Adding and removing programs seems like the most logical approach right now. However, this means one program per filter and switching between all of them at render time. I wonder how these other approaches would compare:

  1. Attaching / detaching shader pairs and re-linking a single composite program, instead of adding / removing whole programs. The OpenGL ES 2.0 Programming Guide says you cannot do this. However, since desktop GL allows multiple shader objects in one program, I'm curious whether it would be a better approach if ES supported it.
  2. Keeping the filters in text form (each filter's code in a function other than main) and instead compiling them all into a monolithic shader pair (with a generated main, of course) each time a filter is added / removed.
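
Option 2 above amounts to source concatenation plus a generated main. A hedged sketch, assuming each filter contributes a GLSL function of the form `vec4 name(vec4 c)` (the helper and its parameters are made up for illustration):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical: concatenate the filter functions into one fragment shader
 * and generate a main() that chains their calls. This string would then be
 * recompiled with glShaderSource/glCompileShader whenever the chain changes. */
static void build_monolithic_fragment_shader(const char **filter_sources,
                                             const char **filter_names,
                                             int count, char *out, size_t cap) {
    out[0] = '\0';
    strncat(out, "precision mediump float;\n"
                 "uniform sampler2D u_texture;\n"
                 "varying vec2 v_texCoord;\n", cap - strlen(out) - 1);
    for (int i = 0; i < count; ++i)              /* filter function bodies */
        strncat(out, filter_sources[i], cap - strlen(out) - 1);
    strncat(out, "void main() {\n"
                 "  vec4 c = texture2D(u_texture, v_texCoord);\n",
            cap - strlen(out) - 1);
    for (int i = 0; i < count; ++i) {            /* chain the calls in order */
        strncat(out, "  c = ", cap - strlen(out) - 1);
        strncat(out, filter_names[i], cap - strlen(out) - 1);
        strncat(out, "(c);\n", cap - strlen(out) - 1);
    }
    strncat(out, "  gl_FragColor = c;\n}\n", cap - strlen(out) - 1);
}
```

The trade-off: a single glUseProgram per frame, at the cost of a recompile on every chain edit.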

Best way to implement per-filter caching:

Right now, adding or removing any number of filters at any point in the chain requires running all the programs again to render the final image. It would be nice, however, if I could somehow cache the output of each filter. That way, removing, adding, or bypassing a filter would only require re-running the filters past the point of insertion / deletion / bypassing in the chain. I can think of a naive approach: on each program pass, bind a different texture object to GL_TEXTURE0 and to the framebuffer's GL_COLOR_ATTACHMENT0. This way I can keep the output of every filter around. However, creating a new texture, binding it, and changing the framebuffer attachment once per filter seems inefficient.
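
The caching idea boils down to tracking the first "dirty" pass: if each pass's output texture is kept around, an edit at index k only invalidates passes k onward. A minimal sketch of that bookkeeping, with hypothetical struct and function names:

```c
#include <assert.h>

/* Hypothetical per-filter cache state: each pass's output texture is kept;
 * first_dirty is the lowest pass whose cached output is stale. */
struct FilterChain {
    int count;       /* number of filters/programs P */
    int first_dirty; /* == count means all cached outputs are valid */
};

/* An insertion, deletion, or bypass at `index` invalidates that pass and
 * everything downstream; passes before it can reuse cached textures. */
static void chain_edit(struct FilterChain *c, int index, int new_count) {
    c->count = new_count;
    if (index < c->first_dirty)
        c->first_dirty = index;
}

/* The next render only needs to run passes first_dirty..count-1. */
static int first_pass_to_run(const struct FilterChain *c) {
    return c->first_dirty;
}

/* After a render, all cached outputs are valid again. */
static void chain_mark_clean(struct FilterChain *c) {
    c->first_dirty = c->count;
}
```

The GL side of this would still pay one texture + FBO-attachment change per pass, which is the cost the question worries about; the bookkeeping itself is cheap.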

SaldaVonSchwartz

1 Answer

I don't have much to say about the filter output caching problem, but as for filter switching... The EXT_separate_shader_objects extension is designed to solve this very problem, and it's supported on every device that runs iOS 5.0 or later. Here's a brief overview:

  1. There's a new convenience API for compiling shader programs that also takes care of making them "separable":

    _vertexProgram = glCreateShaderProgramvEXT(GL_VERTEX_SHADER, 1, &source);
    
  2. Program Pipeline Objects manage your program state and let you mix and match already-compiled shaders:

    GLuint _ppo;
    glGenProgramPipelinesEXT(1, &_ppo);
    glBindProgramPipelineEXT(_ppo);
    glUseProgramStagesEXT(_ppo, GL_VERTEX_SHADER_BIT_EXT, _vertexProgram);
    glUseProgramStagesEXT(_ppo, GL_FRAGMENT_SHADER_BIT_EXT, _fragmentProgram);
    
  3. Mixing and matching shaders can make attribute binding a pain, so you can specify that in the shader (likewise for varyings):

    #extension GL_EXT_separate_shader_objects : enable
    layout(location = 0) attribute vec4 position;
    layout(location = 1) attribute vec3 normal;
    
  4. Uniforms are set for the shader program they belong to:

    glProgramUniformMatrix3fvEXT(_vertexProgram, u_normalMatrix, 1, 0, _normalMatrix.m);
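
Putting these pieces together, per-filter switching becomes: bind the pipeline object once, keep one shared vertex program, and rebind only the fragment stage per pass. Since GL calls need a live context, here is a plain-C model of that call pattern (the struct and functions are stand-ins mirroring glUseProgramStagesEXT semantics, not real API):

```c
#include <assert.h>

/* Stand-in for a program pipeline object: one vertex and one fragment
 * stage. Not real GL API; it only models the binding pattern. */
struct Pipeline { unsigned vert_program; unsigned frag_program; };

/* Models glUseProgramStagesEXT(ppo, GL_FRAGMENT_SHADER_BIT_EXT, p). */
static void set_fragment_stage(struct Pipeline *ppo, unsigned p) {
    ppo->frag_program = p;
}

/* Run the chain: the vertex stage (the shared fullscreen-quad program)
 * never changes; each pass rebinds only the fragment stage, then draws. */
static unsigned run_filter_chain(struct Pipeline *ppo,
                                 const unsigned *frag_programs, int count) {
    unsigned draws = 0;
    for (int i = 0; i < count; ++i) {
        set_fragment_stage(ppo, frag_programs[i]);
        /* glDrawArrays(GL_TRIANGLE_STRIP, 0, 4) would go here */
        ++draws;
    }
    return draws;
}
```

Compared with whole-program switching, only the fragment stage binding changes between passes, and uniforms stay attached to their own separable programs via the glProgramUniform*EXT calls above.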
    
rickster