
Can somebody tell me whether it is possible to use full-precision floating-point 2D textures on the iPad 2? (Full precision = single precision.)

By printing out the OpenGL ES extensions implemented on the iPad 2 using

glGetString(GL_EXTENSIONS)

I figured out that both OES_texture_half_float and OES_texture_float are supported.
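
For reference, a minimal sketch of that check (assuming an ES 2.0 context is current; the hasExtension helper is illustrative, and a stricter version would match whole extension tokens rather than substrings):

    #include <string.h>
    #include <OpenGLES/ES2/gl.h>

    static int hasExtension(const char *name)
    {
        const char *extensions = (const char *)glGetString(GL_EXTENSIONS);
        return extensions != NULL && strstr(extensions, name) != NULL;
    }

    // On the iPad 2, both of these come back true:
    // hasExtension("GL_OES_texture_half_float");
    // hasExtension("GL_OES_texture_float");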

Using GL_HALF_FLOAT_OES as the texture's type works fine,

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_HALF_FLOAT_OES, NULL);

whereas using GL_FLOAT results in an incomplete framebuffer object:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_FLOAT, NULL);
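
For completeness, a sketch of the kind of FBO setup that exposes the problem (tex, fbo, w, and h are placeholder names):

    GLuint tex, fbo;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_FLOAT, NULL);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    // Reports GL_FRAMEBUFFER_COMPLETE with GL_HALF_FLOAT_OES above,
    // but not with GL_FLOAT.
    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);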

Am I doing something wrong here, or are full-precision floating-point textures just not supported?

Thank you in advance.

RayDeeA

1 Answer


The OES_texture_float extension provides for 32-bit floating-point textures to be used as inputs, but that doesn't mean you can render into them. The EXT_color_buffer_half_float extension adds the capability for iOS devices (I believe A5 GPUs and higher) to render into 16-bit half-float textures, but not into 32-bit full-float ones.
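
A hedged sketch of how that plays out in code (w and h are placeholders, and the bound texture and FBO are assumed to be set up as in the question):

    // Only rely on a renderable half-float color attachment when the device
    // advertises the extension; otherwise fall back to 8 bits per channel.
    const char *extensions = (const char *)glGetString(GL_EXTENSIONS);
    if (extensions && strstr(extensions, "GL_EXT_color_buffer_half_float")) {
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                     GL_RGBA, GL_HALF_FLOAT_OES, NULL);
    } else {
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    }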

I don't believe that any of the current iOS devices allow for rendering into full 32-bit float textures, only for using them as inputs when rendering a scene.

Brad Larson
  • Thank you for your answer. I was rendering a summed-area table (SAT) and ran into a big precision problem using the recursive doubling approach, which takes a previously rendered texture and uses it as an input. Since precision is already lost along the way, the resulting values are no longer correct. – RayDeeA Jan 15 '13 at 19:15
  • @RayDeeA - I had thought generating integral images would be too much for OpenGL ES devices to handle. Are you thinking something like this technique: http://www.shaderwrangler.com/publications/sat/SAT_EG2005.pdf might be practical on these devices? If so, I'd be interested in hearing about your efforts so far, because I might have applications for this myself. – Brad Larson Jan 15 '13 at 21:10
  • How would you use a 32-bit floating-point texture as an input when you don't have a chance to generate one beforehand? – RayDeeA Jan 16 '13 at 15:50
  • It is not that fast. My implementation takes approx 0.28 seconds for a 2205 * 1537 texture, but I think there is enough room for further optimization. The result looks really nice on the simulator, but has some overflowed pixels on the actual device. – RayDeeA Jan 16 '13 at 15:54
  • Do you think it would be possible to use an unsigned-int texture, pack the float values into unsigned ints, and unpack them at the end? – RayDeeA Jan 16 '13 at 16:18
  • Or maybe do some kind of bitwise operation to split the highp value into two 16-bit values, then "draw" the values next to each other into a double-sized texture, and merge them again after the calculation? – RayDeeA Jan 16 '13 at 18:07
  • @RayDeeA - Do you need to maintain values for each color channel, or do you only care about luminance? If the latter, you could pack your values into a 32-bit integer that spans the four 8-bit color channels (a sketch of that kind of packing follows these comments). In regards to sourcing 32-bit float textures, you can provide those yourself (from images or other local data) and upload them to the GPU for use. – Brad Larson Jan 16 '13 at 22:43
  • Very good idea; unfortunately I need to process all 4 color channels independently, but what I could do is split all 4 channels and process each one separately with the method you described. I'm sure I would lose performance, and there is another problem: it seems there is no possibility of using unsigned-integer textures in the OpenGL ES 2.0 implementation that the iPad 2 currently uses. Unsigned byte works perfectly, which means it results in a complete FBO, but even half_float has more bits to offer. I think the depth buffer has more precision (32-bit). Maybe that's the way to do it. – RayDeeA Jan 17 '13 at 16:58
  • @RayDeeA - Except that [you can't write per-fragment depth values using OpenGL ES](http://stackoverflow.com/questions/4534467/writing-texture-data-onto-depth-buffer/4596314#4596314), so you might be limited in how you could use the depth buffer there. – Brad Larson Jan 17 '13 at 17:01
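
For the packing idea in the comments above, here is a hedged sketch (not from the original thread) of the widely used GLSL ES 2.0 routines for packing a highp float in [0.0, 1.0) into an RGBA8 color and unpacking it again, written as a C shader-source string. The function names are illustrative, and values outside [0.0, 1.0) would need to be rescaled before packing.

    static const char *kPackUnpackGLSL =
        "precision highp float;\n"
        // Spread a [0.0, 1.0) float across the four 8-bit channels.
        "vec4 packFloat(float v) {\n"
        "    vec4 enc = fract(v * vec4(1.0, 255.0, 65025.0, 16581375.0));\n"
        "    enc -= enc.yzww * vec4(1.0/255.0, 1.0/255.0, 1.0/255.0, 0.0);\n"
        "    return enc;\n"
        "}\n"
        // Reassemble the float from the four channels.
        "float unpackFloat(vec4 rgba) {\n"
        "    return dot(rgba, vec4(1.0, 1.0/255.0, 1.0/65025.0, 1.0/16581375.0));\n"
        "}\n";

Since this packs one value per texel, an RGBA summed-area table would need each of its four channels packed into its own texture (or pass), as discussed in the comments above.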