
I'm trying to figure out how to draw a single textured quad many times. My issue is that these quads are constantly created and deleted, and every one of them has a unique position and rotation. I'm not sure a VBO is the best solution: I've heard modifying buffers is extremely slow on Android, and it seems I would need to rebuild the buffer each frame since different quads can disappear at any time (e.g. when they collide with an enemy). If I simply issue a draw call for each one, I drop to 20 fps at around 100 quads, which is unusable. Any advice?

Edit: I'm trying to create a bullet hell game, and figuring out how to draw 500+ things is hurting my head.

genpfault
Metalith

1 Answer


I think you're after a particle system. A similar question is here: Drawing many textured particles quickly in OpenGL ES 1.1.

Using point sprites is quite cheap, but you have to do extra work in the fragment shader, and if you need different-sized particles note that while GLES2 does let you write gl_PointSize, the guaranteed maximum point size can be as small as one pixel on some devices (query GL_ALIASED_POINT_SIZE_RANGE). See: gl_PointSize Corresponding to World Space Size
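If you do go the point-sprite route, the world-space-to-gl_PointSize maths from the linked question boils down to a little perspective arithmetic. Below is a hedged sketch of that calculation (the function name and parameters are illustrative, not from the answer): for a symmetric perspective projection, an object of a given world-space diameter at a given view-space depth covers roughly this many pixels.

```python
import math

def point_size_px(world_diameter, depth, fovy_deg, viewport_h):
    """Approximate gl_PointSize (in pixels) for a sprite of world-space
    diameter `world_diameter` at view-space depth `depth`, under a symmetric
    perspective projection with vertical FOV `fovy_deg`."""
    focal = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)  # the projection matrix's [1][1]
    ndc_extent = world_diameter * focal / depth           # size in NDC (-1..1 spans the screen)
    return ndc_extent * viewport_h / 2.0                  # NDC units -> pixels

# e.g. a 2-unit-wide sprite, 2 units away, 90 degree FOV, 600 px tall viewport:
print(point_size_px(2.0, 2.0, 90.0, 600))  # roughly 300 px
```

In a vertex shader you would evaluate the same expression and assign it to gl_PointSize, with `focal` coming straight from the projection matrix.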

My go-to particle system stores positions in a double-buffered texture, then draws everything with a single draw call and a static array of quads. This is related, but I'll describe it in a bit more detail here...

  1. Create a texture (floating point if you can, though this may limit the supported devices). Each pixel holds one particle's position and maybe rotation information.
  2. [EDITED] If you need to animate the particles, you want to change the values in the texture each frame. To make this fast, have the GPU do it in a shader: using an FBO, draw a fullscreen polygon and update the values in the fragment shader. The catch is that you can't (or shouldn't) read and write the same texture, so the common approach is to double buffer it: create a second texture to render to while you read from the first, then ping-pong between them.
  3. Create a VBO for drawing triangles. The vertex positions are all the same, filling a -1 to 1 quad, but make each quad's texture coordinates address the correct pixel in the above texture.
  4. Draw the VBO with your positions texture bound. In the vertex shader, read the position at the vertex's texture coordinate, scale the -1 to 1 vertex positions to the right size, and apply the position and any rotation. Pass the original -1 to 1 position to the fragment shader as the texture coordinate for any regular colour textures.

    If you ever have a GLSL version with gl_VertexID, I quite like generating these coordinates in the vertex shader, saving storing unnecessarily trivial data just to draw simple objects. This for example.

  5. To spawn particles, use glTexSubImage2D to write a block of new particles into the position texture. You may need additional textures if you start storing more particle attributes.
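The double-buffered update in step 2 can be sketched on the CPU with two arrays standing in for the two RGBA float textures (on the GPU the loop body would be the fragment shader of the FBO pass). All names and values below — NUM_PARTICLES, DT, the x/y/rotation layout — are illustrative assumptions, not from the answer:

```python
import numpy as np

NUM_PARTICLES = 8
DT = 1.0 / 60.0

# Each "pixel" holds x, y, rotation, angular velocity (one possible layout).
tex_a = np.zeros((NUM_PARTICLES, 4), dtype=np.float32)
tex_b = np.zeros_like(tex_a)
velocity = np.tile(np.array([1.0, 2.0, 0.0, 0.0], np.float32), (NUM_PARTICLES, 1))

read_tex, write_tex = tex_a, tex_b

def update():
    """One simulated frame: read one buffer, write the other, then swap.
    The write is what the fragment shader would do; the swap is the ping-pong
    that avoids reading and writing the same texture."""
    global read_tex, write_tex
    write_tex[:] = read_tex + velocity * DT
    read_tex, write_tex = write_tex, read_tex

for _ in range(60):
    update()

print(read_tex[0][:2])  # ~ [1, 2]: first particle has moved after one simulated second
```

The draw pass (step 4) would then sample `read_tex` in the vertex shader; spawning (step 5) corresponds to overwriting a slice of `read_tex`, which is what glTexSubImage2D does on the GPU.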
jozxyqk
  • So what you're saying is: I create a texture (a picture), then use the RGBA channels to correspond to XYZR. Pass this texture into the vertex shader, then create a VBO with the same number of points as pixels in the texture. Then draw a bunch of point sprites that will show the actual picture I want them to show? Also, I'm not sure what you mean by number 2 – Metalith Jun 15 '15 at 06:17
  • @Metalith yes, although if point sprites are limiting, you might want to use triangles (6 vertices = two triangles for a quad, per pixel in the texture). Point 2 is for dynamically updating the texture, so your particles can move over time. Will update it. – jozxyqk Jun 15 '15 at 06:22
  • Hmmm very interesting. Why two textures though? Couldn't I just use one, update it at the beginning of onDrawFrame(), and use that? Sorry for not understanding – Metalith Jun 15 '15 at 06:27
  • @Metalith Yes, but the CPU will be slower than the GPU at animating. Mobile devices have shared memory (afaik), so the transfer probably isn't much of an issue, but on a desktop streaming a texture to the GPU each frame is quite slow. – jozxyqk Jun 15 '15 at 06:31
  • Ah... I see I see. Thank you very much then – Metalith Jun 15 '15 at 06:38
  • @Metalith On most desktop GPUs this approach will easily handle millions. Not sure about mobiles, but I think it should be fine unless your particles are huge. If so, you might want occlusion culling, or to draw single particles textured with an image of many particles. – jozxyqk Jun 15 '15 at 06:53
  • Holy crap, without even implementing a VBO, just putting 600+ points in a coordinate buffer and drawing them as point sprites shows 0 fps drop. That's incredible! nvm: 3000+. Your method definitely works – Metalith Jun 15 '15 at 07:53
  • Last question. How do you handle deletion of quads? It seems if I try to remove any quads I would have to recreate my texture every frame, since lots of bullets do get deleted. Is that OK performance-wise, I mean? – Metalith Jun 15 '15 at 15:30
  • Just move it off-screen. If you have whole ranges you don't need to draw, you could be selective about which parts of the VBO you draw. – jozxyqk Jun 15 '15 at 15:40
  • So the way I have it now: I created a texture in GIMP that is 1x2000 and RGB565. Would that work better than rendering to a framebuffer object, since it's not nearly as many pixels? – Metalith Jun 16 '15 at 10:41
  • A 1D texture is fine, but why are you creating it in gimp? I would have thought you'd want to programmatically spawn quads. Also RGB565 will be quite poor precision for positioning quads (you'd have at most 32x64 unique positions), which is why I suggested floating point. I wrote [this](https://github.com/aantthony/digulator/blob/master/objects/particles.js) a while back. It was written in a few hours so not great but it might give you an idea of what I had in mind. – jozxyqk Jun 16 '15 at 10:49
  • Wait, so instead of drawing thousands of quads that you manually translate with glBufferSubData, you render once with colors encoding position/velocity at those positions, then draw using those textures? Trying to wrap my head around these new concepts is painful – Metalith Jun 16 '15 at 23:25
  • Oooooh. So it binds texPosition and texVelocity to the framebuffer, which updates each particle's position and velocity into those textures. Then when it does the actual drawing, it binds those framebuffer textures to the particle's drawing shaders and uses them to draw each particle at the correct position? That's genius. – Metalith Jun 17 '15 at 02:39
  • I think the only thing left is to figure out how `vec2 coord = gl_FragCoord.xy / vpSize; vec4 position = texture2D(positions, coord);` works, and how the particles find where they are supposed to be in the final drawing – Metalith Jun 17 '15 at 03:17
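The two coordinate lookups discussed in the last few comments can be sketched on the CPU, assuming a 1xN position texture like the 1x2000 one mentioned above (function names are illustrative). The key idea: the static VBO bakes in a texcoord per quad that addresses that particle's pixel, while the update pass derives the same coordinate from gl_FragCoord so each fragment reads its own particle's old state.

```python
tex_width = 2000  # one pixel per particle, as in the 1x2000 texture above

def particle_texcoord(i):
    """Texcoord baked into the static VBO for particle i: it addresses the
    centre of pixel i, so the draw pass's vertex shader can fetch that
    particle's position with texture2D(positions, coord)."""
    return ((i + 0.5) / tex_width, 0.5)

def update_pass_coord(frag_x, frag_y, vp_w, vp_h):
    """Inside the update FBO pass, gl_FragCoord.xy / vpSize normalises the
    pixel currently being written into the 0..1 range, so the fragment shader
    can read the *same* particle's old position from the read texture."""
    return (frag_x / vp_w, frag_y / vp_h)

# A fragment at pixel centre (i + 0.5, 0.5) lands exactly on particle i's
# baked-in texcoord, so update pass and draw pass agree on which pixel is whose:
assert update_pass_coord(41.5, 0.5, tex_width, 1) == particle_texcoord(41)
```

So "how the particles find where they are supposed to be" is purely a matter of both passes agreeing on this pixel-per-particle addressing.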