
I'm drawing planets in OpenGL ES, and running into some interesting performance issues. The general question is: how best to render "hugely detailed" textures on a sphere?

(the sphere is guaranteed; I'm interested in sphere-specific optimizations)

Base case:

  • Window is approx. 2048 x 1536 (e.g. iPad3)
  • Texture map for globe is 24,000 x 12,000 pixels (an area half the size of USA fits the full width of screen)
  • Globe is displayed at everything from zoomed in (USA fills screen) to zoomed out (whole globe visible)
  • I need a MINIMUM of 3 texture layers (1 for the planet surface, 1 for day/night differences, 1 for the user interface (highlighting different regions))
  • Some of the layers are animated (i.e. they have to load and drop their texture at runtime, rapidly)

Limitations:

  • top-end tablets are limited to 4096x4096 textures
  • top-end tablets are limited to 8 simultaneous texture units

Problems:

  • In total, it's naively ~288 million pixels of texture data per layer (24,000 x 12,000) - over 860 million across the three layers
  • Splitting into smaller textures doesn't work well because devices only have 8 texture units; with a single layer I could split across all 8 units and keep every texture under 4096x4096 - but that leaves no units for the other layers
  • Rendering the layers as separate geometry works poorly because they need to be blended in the fragment shader
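To put rough numbers on the first point (a back-of-the-envelope sketch; the map size, RGBA8 assumption, and layer count are taken from the question above):

```python
# Back-of-the-envelope texture budget for the globe described above.
WIDTH, HEIGHT = 24_000, 12_000      # full-resolution equirectangular map
BYTES_PER_PIXEL = 4                 # assuming uncompressed RGBA8
LAYERS = 3                          # surface + day/night + UI overlay

pixels_per_layer = WIDTH * HEIGHT                        # 288,000,000
bytes_all_layers = pixels_per_layer * BYTES_PER_PIXEL * LAYERS

print(f"{pixels_per_layer:,} pixels per layer")
print(f"{bytes_all_layers / 2**30:.2f} GiB for {LAYERS} uncompressed layers")
```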

...at the moment, the only idea I have that sounds viable is:

  1. split the sphere into NxM "pieces of sphere" and render each one as separate geometry
  2. use mipmaps to render low-res textures when zoomed out
  3. ...rely on simple culling to cut out most of them when zoomed in, and mipmapping to use small(er) textures when they can't be culled

...but it seems there ought to be an easier way / better options?
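Steps 1 and 3 of the plan above can be sketched like this (a minimal illustration, not from the question: the 8x16 grid and the dot-product threshold are arbitrary choices, and a real renderer would cull against the view frustum too, not just backfacing tiles):

```python
import math

def visible_tiles(n_lat, n_lon, view_dir, min_dot=0.0):
    """Return (row, col) lat/long tiles whose center normal faces the camera.

    view_dir: unit vector from the sphere's center toward the camera.
    min_dot: cull tiles whose center normal . view_dir <= min_dot;
             0.0 keeps exactly the front-facing hemisphere.
    """
    tiles = []
    for row in range(n_lat):
        # latitude of the tile center, from +pi/2 (north) to -pi/2 (south)
        lat = math.pi / 2 - (row + 0.5) * math.pi / n_lat
        for col in range(n_lon):
            lon = -math.pi + (col + 0.5) * 2 * math.pi / n_lon
            # unit normal at the tile center
            normal = (math.cos(lat) * math.cos(lon),
                      math.sin(lat),
                      math.cos(lat) * math.sin(lon))
            dot = sum(n * v for n, v in zip(normal, view_dir))
            if dot > min_dot:
                tiles.append((row, col))
    return tiles

# Camera looking down the +x axis: exactly half of an 8x16 grid survives.
front = visible_tiles(8, 16, (1.0, 0.0, 0.0))
```

With each surviving tile carrying its own (mipmapped) sub-texture, no single texture needs to exceed the 4096x4096 limit.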

Adam
  • Would using a sphere impostor help here: http://stackoverflow.com/questions/10488086/drawing-a-sphere-in-opengl-es/10506172#10506172 ? That dramatically simplifies the geometry, but requires a lookup function in the fragment shader to map the square texture to the spherical surface. It also provides something that is perfectly smooth at all zoom scales. – Brad Larson Apr 06 '13 at 16:15
  • Doesn't that mean throwing away all of OpenGL and writing a software raytracing library? You say "a sphere looks pretty much the same from every angle" but in fact it looks uniquely different from EVERY angle - this is texture-mapped! – Adam Apr 06 '13 at 16:39
    An untextured sphere does look identical from any direction, which is what lets you get away with dropping geometry and calculating one set of per-pixel normals, heights, etc. For texturing, the rotation of the sphere in a given frame, combined with the pixel location, can be fed into a per-pixel texture mapping function in the fragment shader. You'll note that all of the above in my answer is done in OpenGL ES, so you're throwing nothing away, aside from your geometry generation. I've done this before for texturing spheres in this manner and it works well. – Brad Larson Apr 06 '13 at 16:46
  • OK, but how does this ("fed into a per-pixel texture mapping function") impact performance? Apologies if I'm not getting this, but it seems like I'd just be re-implementing the concept of model + view + projection matrices + all the onboard T&L + texture-lookup ... inside a fragment shader. Which, because it bypasses the vertex shader, is surely going to be as-slow-or-slower? – Adam Apr 06 '13 at 17:36
  • ...ah, except: the calculations would all be simplified because we know everything is on a sphere. So maybe that makes up for what we're losing? – Adam Apr 06 '13 at 17:38
  • Right, you don't need to deal with the complete complexity for transformation. Looking at the function I have here, I multiply an inverse MVP matrix for the sphere with the normal at each position on the surface of the sphere, then use a simple two-case lookup function for the texture coordinate I need in a source rectangle. The `SphereAOLookup.fsh` shader within this application: http://sunsetlakesoftware.com/molecules has the function I use, based on this paper: http://vcg.isti.cnr.it/Publications/2006/TCM06/Tarini_FinalVersionElec.pdf . It's pretty fast. – Brad Larson Apr 08 '13 at 03:05
  • OK, thanks. Going back to the OP ... how much does this actually help? Geometry isn't a limiting factor - it's texture lookups (and sourcing texture data at high-enough res) that's the problem. – Adam Apr 08 '13 at 13:00
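The impostor-style lookup discussed in the comments can be sketched in scalar form (an illustration only; the real version runs per fragment in GLSL and folds in the inverse MVP matrix, and the function name here is made up):

```python
import math

def sphere_uv(ray_origin, ray_dir, center=(0.0, 0.0, 0.0), radius=1.0):
    """Intersect a view ray with a sphere and return equirectangular (u, v).

    Returns None if the ray misses the sphere. ray_dir must be unit length.
    In the impostor approach this math runs once per fragment.
    """
    # Solve |o + t*d - c|^2 = r^2 for the nearest intersection t.
    o = [a - b for a, b in zip(ray_origin, center)]
    b = sum(oi * di for oi, di in zip(o, ray_dir))
    c = sum(oi * oi for oi in o) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None                      # ray misses the sphere
    t = -b - math.sqrt(disc)
    hit = [oi + t * di for oi, di in zip(o, ray_dir)]
    nx, ny, nz = (h / radius for h in hit)   # unit surface normal
    u = 0.5 + math.atan2(nz, nx) / (2 * math.pi)
    v = 0.5 - math.asin(ny) / math.pi
    return u, v
```

As Adam notes, this replaces geometry work with per-fragment math; it does not by itself reduce the texture-lookup or texture-memory load.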

2 Answers


There is no way to fit such huge textures into the memory of a mobile GPU, not even the iPad 3's.

So you have to stream texture data. The technique you need is called clipmapping (popularized by id Software, which extended it into its MegaTexture technology).

Please read about it here; the page links to papers describing the technique: http://en.wikipedia.org/wiki/Clipmap
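The two ideas behind a clipmap can be sketched as follows (illustrative only; the function name, window size, and level count are made-up examples, not part of any real clipmap API):

```python
import math

def clipmap_level(texels_per_unit_needed, finest_texels_per_unit, num_levels):
    """Pick the coarsest clip level that still meets the required detail.

    Level 0 is the finest; each successive level halves the resolution.
    """
    if texels_per_unit_needed >= finest_texels_per_unit:
        return 0
    level = math.floor(math.log2(finest_texels_per_unit / texels_per_unit_needed))
    return min(level, num_levels - 1)

# The key property: each level keeps only a fixed-size window of texels
# around the viewer, so resident GPU memory is independent of the full
# 24,000 x 12,000 virtual texture.
CLIP_SIZE = 2048                 # per-level resident window (illustrative)
LEVELS = 5
resident_bytes = LEVELS * CLIP_SIZE * CLIP_SIZE * 4   # RGBA8, ~80 MiB
```

Zooming then becomes a matter of streaming new tiles into the windows, not of fitting the whole map on the GPU.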

keaukraine
  • That sounds like the wrong technique, since it would involve a lot more effort and be lower performance than what I've already described. If I'm missing something, please explain how this helps. – Adam Apr 01 '13 at 17:28
  • Also: please re-read my question. The problem is not "can't fit huge textures into memory". We *can* fit huge textures into memory - I've already done that and it works. – Adam Apr 01 '13 at 17:29

This is not easily done in ES, as there is no virtual texture extension (yet). You basically need to implement virtual texturing (some ES devices implement ARB_texture_array) and stream in the lowest resolution possible (view-dependent) for your sphere. That way, it is possible to do it all in a fragment shader, no geometry subdivision is required. See this presentation (and the paper) for details how this can be implemented.

If you do the math, it is simply impossible to stream the full ~1.1 GB (24,000 x 12,000 pixels x 4 bytes) in real time. And it would be wasteful, too, as the user will never get to see it all at the same time.
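The same math also shows why view-dependent streaming is enough (a quick sketch using the screen size and map size from the question; one texel per screen pixel is the most detail a single frame can display):

```python
# Per-frame working set vs. the full map, for the globe in the question.
SCREEN_W, SCREEN_H = 2048, 1536     # iPad 3 class display
BYTES_PER_PIXEL = 4                 # RGBA8
LAYERS = 3

# One texel per screen pixel bounds the detail a frame can actually show.
working_set = SCREEN_W * SCREEN_H * BYTES_PER_PIXEL * LAYERS
full_map = 24_000 * 12_000 * BYTES_PER_PIXEL   # one layer, uncompressed

print(f"working set per frame: {working_set / 2**20:.0f} MiB")   # ~36 MiB
print(f"full map, one layer:   {full_map / 2**20:.0f} MiB")      # ~1099 MiB
```

So a virtual-texturing scheme only ever needs a few tens of megabytes resident, refreshed as the view moves.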

the swine