
I'm drawing a lot of rectangles, so I'm using WebGL2RenderingContext.drawArraysInstanced() for it. I'm also using a lot of buffers to control the shaders per instance (containing data such as position, colors, state, etc.). I now want to control the order in which my instances are drawn (for example, to bring the instance clicked by the user to the front), and I'm looking for something like an INSTANCE_ARRAY_BUFFER: something similar to ELEMENT_ARRAY_BUFFER, but for instances instead of vertices, something that would let me say "draw the instances in the following order: 3, 1, 2, 0, ...". Is there any API which would allow me to control the order of the drawn instances without modifying the order of elements of all the data buffers?

Based on my research so far I suspect the answer is "no", but maybe I missed some hidden WebGL API. Moreover, as a bonus question: was something like this introduced in recent OpenGL versions or in WebGPU?

  • @httpdigest thanks for the link. I'm using quads to display UI elements and I'm frequently changing attributes per instance. Based on your link, I see three architectures: (1) what I've got now (rendering instances); (2) no instances, indexed geometry, but it requires setting each attribute 4 times per "instance"; (3) no instances, no indexed geometry, but it requires setting each attribute 6 times per "instance". Architecture 3 could solve my problem, as then we could use `ELEMENT_ARRAY_BUFFER` for sorting, but setting attributes would be very slow then. Of course, I need to benchmark these. Still, I wouldn't be happy to slow down attribute setting. – Wojciech Danilo Dec 19 '20 at 21:07
  • With GL_DEPTH_TEST you don't need to take care of the rendering order. For each rectangle you should have a depth parameter. Then you can manipulate this parameter to bring a rectangle to the front (a rough sketch of this follows these comments). – Hihikomori Dec 19 '20 at 21:28
  • @МатвейВислоух that would work with shapes done with just geometry, but we are rendering shapes with SDF-based shaders applied to the quad-sprites. Such shaders allow us to have super smooth curves and smooth zooming of everything. In such a case, using depth-buffer would break our anti-aliasing. – Wojciech Danilo Dec 19 '20 at 22:53
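For reference, the depth-buffer approach suggested in the comment above would look roughly like the following. This is only a sketch of the idea from the comment, not the asker's actual setup: `depthBuffer`, `a_depth`, `instanceCount`, and `clickedInstance` are made-up names, and the vertex shader is assumed to write the per-instance depth into `gl_Position.z`.

```js
gl.enable(gl.DEPTH_TEST);                   // default depth func is LESS
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

// one depth value per instance; with LESS, smaller depths end up in front
const depths = new Float32Array(instanceCount).fill(0.5);
depths[clickedInstance] = 0.0;              // bring the clicked rectangle forward

gl.bindBuffer(gl.ARRAY_BUFFER, depthBuffer);
gl.bufferData(gl.ARRAY_BUFFER, depths, gl.DYNAMIC_DRAW);
gl.vertexAttribPointer(a_depth, 1, gl.FLOAT, false, 0, 0);
gl.enableVertexAttribArray(a_depth);
gl.vertexAttribDivisor(a_depth, 1);         // one depth value per instance
gl.drawArraysInstanced(gl.TRIANGLE_STRIP, 0, 4, instanceCount);
```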

1 Answer


Your only actual question is

Is there any API which would allow me to control the order of the drawn instances without modifying the order of elements of all the data buffers?

Yes: put your data in textures and pass an array of instance ids in a buffer.

Some examples (not just quads) here and here.
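To make this concrete, here is a minimal sketch of that idea (only a sketch: the names `idBuffer`, `instanceIdLoc`, and `dataTex` are placeholders, the fragment shader is omitted, and the rest of the attribute setup is abbreviated). Per-instance rectangle data lives one texel per instance in an RGBA32F texture, the vertex shader fetches it with `texelFetch` using an id supplied as an instanced attribute, and changing the draw order only means re-uploading the small id buffer:

```js
// GLSL ES 3.00 vertex shader: per-instance rectangle data (x, y, w, h) is
// stored one texel per instance in an RGBA32F texture
const vs = `#version 300 es
in vec2 corner;        // unit-quad corner (0..1), advances per vertex
in float instanceId;   // instanced attribute: which texel to read
uniform sampler2D dataTex;
void main() {
  vec4 rect = texelFetch(dataTex, ivec2(int(instanceId), 0), 0);
  vec2 pos = rect.xy + corner * rect.zw;      // position the quad
  gl_Position = vec4(pos, 0.0, 1.0);
}`;

// To change the draw order, only this small id buffer is rewritten;
// none of the per-instance data needs to move.
const ids = new Float32Array([3, 1, 2, 0 /* , ... */]);
gl.bindBuffer(gl.ARRAY_BUFFER, idBuffer);                 // assumed buffer
gl.bufferData(gl.ARRAY_BUFFER, ids, gl.DYNAMIC_DRAW);
gl.vertexAttribPointer(instanceIdLoc, 1, gl.FLOAT, false, 0, 0);
gl.enableVertexAttribArray(instanceIdLoc);
gl.vertexAttribDivisor(instanceIdLoc, 1);                 // advance once per instance
gl.drawArraysInstanced(gl.TRIANGLE_STRIP, 0, 4, ids.length);
```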

I have no idea how much faster or slower this would be. My intuition is it would be slower but I haven't tested.

It would seem strange to me if draw order were the only thing you're optimizing for and you never needed to position the quads and change their UVs. If you do need to position all the quads and change their UVs, then just write them to the buffers in the correct order in the first place.

Other notes, based on the comments:

  • Are instances the best way to draw lots of quads?

    I think the verdict here is not so clear. 10 years ago my experience was instancing was 30% slower than just creating the data for all the quads. My tests indicate that's no longer true, at least not on any GPU I own and not in WebGL.

  • Is writing 4 or 6 vertices to buffers per quad a good solution?

    It's common to just write all the data to the buffers every frame for UIs; see pretty much every ImGUI library (a sketch of this approach follows this list).

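As a rough illustration of that approach (assuming a hypothetical `sortedQuads` array already in the desired back-to-front order and a `quadBuffer` created elsewhere), the 6-vertices-per-quad variant could look like this:

```js
// build two triangles (6 vertices) per quad, in draw order
function buildQuadVertices(quads /* [{x, y, w, h}, ...] */) {
  const verts = new Float32Array(quads.length * 6 * 2);   // 6 vertices * (x, y)
  let o = 0;
  for (const q of quads) {
    const x0 = q.x, y0 = q.y, x1 = q.x + q.w, y1 = q.y + q.h;
    verts.set([x0, y0,  x1, y0,  x0, y1,
               x0, y1,  x1, y0,  x1, y1], o);
    o += 12;
  }
  return verts;
}

// every frame: later quads in the array are drawn on top of earlier ones
const verts = buildQuadVertices(sortedQuads);
gl.bindBuffer(gl.ARRAY_BUFFER, quadBuffer);
gl.bufferData(gl.ARRAY_BUFFER, verts, gl.DYNAMIC_DRAW);
gl.drawArrays(gl.TRIANGLES, 0, verts.length / 2);
```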
  • Thank you for the amazing answer, as always! Regarding positioning and UVs – I'm computing these in the vertex shader, but our GUI framework is mutable, so we modify only a subset of components on user interaction. Your solution with textures is amazing. However, if I understand correctly, this would require me to re-allocate a new texture with all parameters (and trash the old one) every time a new component is created, right? (Because the texture will contain values for all "instances".) (Of course, I can re-allocate a slightly bigger one as an optimization.) – Wojciech Danilo Dec 20 '20 at 01:55
  • I don't know enough about what you're doing, but you can allocate a texture larger than you need and manage the space inside it (similar to how a std::vector in C++ allocates more memory than is actually used by the array); a rough sketch follows these comments. – gman Dec 20 '20 at 02:09
  • Oh, I didn't know about that! I was sure that texture allocation just blindly occupies the GPU memory space. Amazing to know that, thank you! – Wojciech Danilo Dec 20 '20 at 04:27
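To illustrate that last suggestion, here is a hypothetical sketch of growing the per-instance data texture std::vector-style. All names (`dataTex`, `capacity`, `count`, `allInstanceData`) are made up; `allInstanceData` is assumed to be a CPU-side Float32Array mirror of all instance parameters, 4 floats per instance, kept up to date elsewhere.

```js
let capacity = 256;   // texels allocated (one RGBA32F texel per instance)
let count = 0;        // texels actually in use

function allocDataTexture(gl, texels) {
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  // allocate storage only; contents are uploaded later with texSubImage2D
  gl.texStorage2D(gl.TEXTURE_2D, 1, gl.RGBA32F, texels, 1);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
  return tex;
}

let dataTex = allocDataTexture(gl, capacity);

function addInstance(gl, params /* Float32Array of 4 floats */) {
  if (count === capacity) {
    // grow like std::vector: double the capacity and re-upload existing data
    capacity *= 2;
    gl.deleteTexture(dataTex);
    dataTex = allocDataTexture(gl, capacity);   // leaves the new texture bound
    gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0, count, 1,
                     gl.RGBA, gl.FLOAT, allInstanceData.subarray(0, count * 4));
  }
  // write only the new instance's texel
  gl.bindTexture(gl.TEXTURE_2D, dataTex);
  gl.texSubImage2D(gl.TEXTURE_2D, 0, count, 0, 1, 1, gl.RGBA, gl.FLOAT, params);
  count += 1;
}
```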