
I am building a relatively simple Three.js application.

Outline: You move with your camera along a path through a "world". The world is made up entirely of simple Sprites with SpriteMaterials carrying transparent textures. The textures are basically GIF images with alpha transparency.

The whole application works fine and the performance is quite good. I reduced the camera's draw distance as much as possible, so objects are only rendered when quite close.

My problem is: I have many different "objects" (all sprites) with many different textures. I reuse the Texture/Material references for elements of the same type that appear multiple times in the scene (like trees and rocks).

Still, I'm getting to the point where memory usage climbs too high (above 2 GB) because of all the textures used.

Now, when moving through the world, not all objects are visible/displayed from the beginning, even though I add all the sprites to the scene from the very start. Checking the console, objects that are not yet visible and their textures are only loaded when "moving" further into the world, once new elements actually enter the frustum. Then the memory usage also goes up gradually.

I cannot really use "object pooling" for building the world because of its "layout", let's say.

To test, I added a function that removes objects from the scene and disposes of their material.map as soon as the camera has passed them. Something like:

// Free the GPU resources, then remove the sprite from the scene
this.env_sprites[i].material.map.dispose();
this.env_sprites[i].material.dispose();
this.env_sprites[i].geometry.dispose();
this.scene.remove(this.env_sprites[i]);
this.env_sprites.splice(i, 1);

This works for garbage collection and frees up memory again. My problem then: when moving the camera backwards, the Sprites would need to be re-added to the scene and the materials/textures loaded again, which is quite heavy on performance and does not seem like the right approach to me.

Is there a known technique for dealing with such a setup in terms of memory management and removing and re-adding objects and textures (in the same place)?
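One way to soften the cost of re-adding described above is to cache the decoded textures and only add/remove the sprites themselves. A minimal sketch, assuming a three.js-style loader is passed in (`textureCache` and `getTexture` are hypothetical names, not part of the actual app):

```javascript
// Hypothetical sketch: cache decoded textures so that re-adding a sprite
// after the camera moves back does not re-download or re-decode the image.
const textureCache = new Map();

function getTexture(url, loader) {
  // `loader` is assumed to behave like a THREE.TextureLoader
  if (!textureCache.has(url)) {
    textureCache.set(url, loader.load(url));
  }
  // Shared reference: do NOT call .dispose() on it per sprite
  return textureCache.get(url);
}
```

With a cache like this, the per-sprite cleanup would skip `material.map.dispose()` and only remove the sprite from the scene, trading some resident memory for cheap re-adds.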

I hope I could explain the issue well enough.

This is what the "World" looks like, to give you an impression:

[screenshot of the sprite-based world]

Polygonwindow
  • "basically GIF images..." Image type is important when memory is a concern. GIF, JPEG, etc. use large amounts of RAM, as WebGL uses the images as allocated arrays. A helpful article: http://blog.tojicode.com/2011/12/compressed-textures-in-webgl.html Check this Stack Overflow question: http://stackoverflow.com/questions/12737400/how-to-load-compressed-dds-images-as-collada-textures DDS images with transparency are non-trivial; you may need something like TGA for the alpha channel. TEST to see if this is a viable route for you. I would look here before a frustum manager. You'll want to test users' browsers for DDS image support. – Radio Aug 16 '16 at 20:42
  • Hey Radio, thanks, this is very useful input. I did not know about those compressed formats like DDS. I will now convert the assets into DDS and see if the memory usage gets to an acceptable level. – Polygonwindow Aug 17 '16 at 09:53
  • @Radio: Hi Radio, so I converted all the assets into DDS files. The result is really amazing. The memory usage went down by ~60-70%. The general performance also improved a lot. – Polygonwindow Aug 19 '16 at 20:21
  • Of course, the file size increased almost 10-fold, but gzip compression on the server brought that back down massively. – Polygonwindow Aug 19 '16 at 20:29
  • For other users: creating DDS files is quite painful and basically not possible on OS X. There are tools to create DDS, but they all use a compression algorithm (PVRTC) that is not supported by WebGL. WebGL only seems to support the S3TC algorithm, which is still patented and thus is not implemented by open-source tools, or even by paid tools like GraphicConverter. I had to figure this out through many tests and failures. In the end, the NVIDIA command-line tools for Windows worked just great: a LOT of options, and I had no issues converting files. – Polygonwindow Aug 19 '16 at 20:29
  • Yes, gzip is the way with DDS on the web. Yay! I am glad you saw an improvement and documented your results here, which should be very helpful for anyone else looking into this. It's certainly not trivial, and quite esoteric, but I am glad you found a methodology. – Radio Aug 19 '16 at 22:46
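The DDS route discussed in the comments above can be wired into sprite creation roughly like this. A hedged sketch: `loadCompressedSprite` is a hypothetical helper, and the loader is assumed to follow the callback style of `THREE.DDSLoader` from the three.js examples; compared to loading a GIF/PNG, only the loader type changes.

```javascript
// Hypothetical helper: load a compressed (e.g. S3TC/DDS) texture and hand it
// to the caller. With a compressed texture, the GPU keeps the compressed
// data, so VRAM use is a fraction of an equivalent uncompressed RGBA image.
function loadCompressedSprite(loader, url, onTexture) {
  loader.load(url, function (texture) {
    onTexture(texture);
  });
}

// Usage with three.js (assuming the examples' DDSLoader is included):
// loadCompressedSprite(new THREE.DDSLoader(), 'tree.dds', tex =>
//   scene.add(new THREE.Sprite(new THREE.SpriteMaterial({ map: tex, transparent: true }))));
```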

1 Answer


Each sprite individually wastes a lot of memory: your sprite imagery will rarely be square, so there is wasted space in each texture, and the mipmaps that are also needed for each sprite take additional space on top of that. Also, on lower-end devices you might hit a limit on the total number of textures you can use, but that may or may not be a problem for you.

In any case, the best way to limit GPU memory usage with so many distinct sprites is to use a texture atlas. That means you have at most a handful of textures, each texture contains many of your sprites, and you use distinct UV coordinates within the textures for each sprite. Even then you may still waste some memory over time due to fragmentation from the allocation and deallocation of sprites, but you would run out of memory far less quickly. With texture atlases you might even be able to load all sprites at the same time without having to deallocate them.
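As a rough illustration of the per-sprite UV idea: each sprite's pixel rectangle inside the atlas maps to an offset/repeat pair on its texture. A sketch with a hypothetical `atlasUV` helper, assuming three.js's bottom-left UV origin:

```javascript
// Hypothetical helper: convert a sprite's pixel rectangle inside an atlas
// into the offset/repeat values a three.js texture expects.
function atlasUV(x, y, w, h, atlasW, atlasH) {
  return {
    offsetX: x / atlasW,
    // Image coordinates are top-left based, UVs are bottom-left based
    offsetY: 1 - (y + h) / atlasH,
    repeatX: w / atlasW,
    repeatY: h / atlasH,
  };
}

// e.g. texture.offset.set(uv.offsetX, uv.offsetY);
//      texture.repeat.set(uv.repeatX, uv.repeatY);
```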

If you want to tackle the problem programmatically, there's a library I wrote to manage sprite texture atlases dynamically. I use it to render text myself, but you can use any canvas functions to create your images. It currently uses a basic Knapsack algorithm (which is replaceable) to manage the allocation of sprites across the textures. In my case it means I need only two 1024x1024 textures rather than 130 or so individual sprite textures of wildly varying sizes, and that really saves a lot of GPU memory.
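The knapsack-style allocation mentioned above can be approximated by something as simple as a "shelf" packer. A toy sketch (`packShelves` is a hypothetical name; real atlas tools pack far more tightly and handle deallocation):

```javascript
// Toy "shelf" packer: place rectangles left to right in rows, starting a new
// row (shelf) when the current one is full. Returns one {x, y} per rect.
function packShelves(rects, atlasW) {
  let x = 0, y = 0, shelfH = 0;
  return rects.map(({ w, h }) => {
    if (x + w > atlasW) { // row is full: start a new shelf below
      x = 0;
      y += shelfH;
      shelfH = 0;
    }
    const pos = { x, y };
    x += w;
    shelfH = Math.max(shelfH, h); // shelf is as tall as its tallest rect
    return pos;
  });
}
```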

Note that for offline use there are probably better tools out there that can generate a texture atlas and generate UV coordinates for each sprite, though it should be possible to use node.js together with my library to create the textures and UV data offline which can then be used in a three.js scene online. That would be an exercise left to the reader though.

One thing you could also try is to pool your most common sprites in "main" texture atlases that are always loaded, and load and unload the less commonly used ones on the fly.

Leeft
  • Hi Leeft, thanks, that's very helpful input! And thanks for sharing the library. I will test the memory impact of using DDS as Radio proposes, then I will move on to merging multiple assets into texture atlases. I will report back on how this went. – Polygonwindow Aug 17 '16 at 09:52