
Problem Description

Hi! In our WebGL application, we are drawing many shapes (even hundreds of thousands), and we want to discover which shape is currently under the mouse. I'm looking for a way to do this efficiently.

Details

The shapes are defined with Signed Distance Functions (SDFs). Each shape is drawn by applying a predefined SDF fragment shader to a square polygon (2 triangles). Each shape is assigned a unique ID (uint) on the Rust side (we're using WASM here). The idea is to render the scene twice (in WebGL 1.0) or once to multiple render targets (in WebGL 2.0), with one of the targets holding the ID encoded as a color. Then we can use readPixels to query the color and get the ID of the shape under the mouse. Unfortunately, every solution that we have tried has some downsides.
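For illustration, here is a minimal sketch of the readback step described above, assuming a WebGL 2.0 context, an RGBA8 ID target the same size as the canvas, and a hypothetical `idFramebuffer` that the IDs were rendered into (the names are ours, not an existing API):

function pickId(gl: WebGL2RenderingContext, idFramebuffer: WebGLFramebuffer,
                mouseX: number, mouseY: number): number {
  // Read the single pixel under the mouse from the ID render target.
  gl.bindFramebuffer(gl.FRAMEBUFFER, idFramebuffer);
  const pixel = new Uint8Array(4);
  // Flip Y: readPixels uses a bottom-left origin, mouse coordinates a top-left one.
  gl.readPixels(mouseX, gl.drawingBufferHeight - mouseY - 1, 1, 1,
                gl.RGBA, gl.UNSIGNED_BYTE, pixel);
  // Reassemble the four bytes into the original u32 ID (little-endian).
  return (pixel[0] | (pixel[1] << 8) | (pixel[2] << 16) | (pixel[3] << 24)) >>> 0;
}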

Requirements

  • We need to encode 2 ints per shape: one to tell us what kind of shape it was (e.g. a button or a slider), and a second to tell us which instance of the object it was (e.g. the 5th slider).
  • We will have a lot of shapes (and instances) on the stage, so for each int we would need at least 24 bits, preferably 32 bits, of precision.

What we have tried so far

  • Rendering ID information to an RGBA32UI texture. In this solution we have 32 bits per channel, so we can use 2 channels to represent our IDs. Unfortunately, blending applies only in RGBA mode and only if the color buffer has a fixed-point or floating-point format. We need some form of blending because when drawing shapes like circles, some parts need to be transparent. In the case of the ID color output, our alpha is always 0 or 1.
  • Rendering ID information to an RGBA texture, converting uint to float in GLSL using intBitsToFloat, and then converting the float back to uint in Rust. Unfortunately, this is available in GLSL 330, and we are limited to GLSL 300 in WebGL.
  • Rendering ID information to an RGB32UI texture and using discard for some pixels. This would work, but it can cause performance problems, and we would rather not use it.
  • Converting the ID to float on the Rust side, using it instead of uint, rendering it to an RGBA texture, and converting it back to uint on the Rust side. The problem with this solution is that it is pretty complex, we cannot use all 32 bits (we need to be extra careful about possible NaN encodings), and we feel there should be a better way to do it. (A byte-based variant of this idea is sketched below.)
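For reference, below is a minimal sketch of that byte-based variant (it also comes up in the comments): split the u32 ID into four bytes, hand them to the shader as a normalized vec4 (e.g. via an instance attribute), write that vec4 unchanged to an RGBA8 target, and decode the bytes after readPixels. The helper names are ours, purely illustrative:

// Split a 32-bit ID into four normalized channel values for an RGBA8 target.
// Each byte survives the round trip exactly, since round((b / 255) * 255) === b.
function idToVec4(id: number): [number, number, number, number] {
  return [
    (id & 0xff) / 255,
    ((id >>> 8) & 0xff) / 255,
    ((id >>> 16) & 0xff) / 255,
    ((id >>> 24) & 0xff) / 255,
  ];
}

// Reassemble the bytes returned by readPixels into the original u32 ID.
function bytesToId(pixel: Uint8Array): number {
  return (pixel[0] | (pixel[1] << 8) | (pixel[2] << 16) | (pixel[3] << 24)) >>> 0;
}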
  • What is **alpha** blending supposed to do on an integer texture? I mean, those integers are discrete ID values. How would you *"blend"* between 2 or more IDs? – IInspectable Apr 21 '20 at 14:18
  • In this specific case it is only meant to choose one or the other (we only use an alpha of 0 or 1). – Michael Mauderer Apr 21 '20 at 14:22
  • What I am after is something that can have the same effect as alpha blending for that case, but works in integers. – Michael Mauderer Apr 21 '20 at 14:36
  • Why use RGB32UI? Do you expect to draw 2^96 objects? Even just using RGBA8 would be 4 billion IDs – gman Apr 21 '20 at 15:44
  • At the moment we are using a single channel to encode the ID, which in our application itself is represented as a u32. It seemed the natural fit. – Michael Mauderer Apr 21 '20 at 15:53
  • @gman, ok you are right! In fact, using `RGBA8` we will have space for 16,777,216 IDs (not 4 billion, as alpha does not count), but this would be enough for what we need here. Hmm, I'm wondering why we were missing such an obvious and simple solution. We will try that and report back here! – Wojciech Danilo Apr 21 '20 at 16:23
  • Alpha does count. It's just another channel. It's not special. – gman Apr 21 '20 at 23:14
  • Also FYI, SDFs are generally **not best practice** and are **not performant**. Every pixel requires an insane amount of calculation to figure out if any of the SDFs are in the ray for that pixel. A high-perf app (nearly every game you've ever played) does not use SDFs. – gman Apr 22 '20 at 00:58
  • A few other comments: you say you need 2 ints per shape, but you could use 1 int per shape and map those ints back to your 2 IDs in code. You also say you need 24- to 32-bit IDs. That doesn't make a whole lot of sense. You can't possibly be drawing 16 million to 4 billion objects at any kind of reasonable perf. Plus, there just aren't that many pixels on the screen. As for RGBA8, no conversion from int to float is required. You put in Uint8 r,g,b,a as a Uint32. You read it as a vec4, you write that vec4, and you get back the same Uint8s you put in as a Uint32. – gman Apr 22 '20 at 01:28
  • Also also, the spec you linked to has absolutely nothing to do with WebGL. It's not wrong, but you just got lucky. It's like asking what 11 * 2 is and someone linking [here](https://en.wikipedia.org/wiki/Wyoming). It might sound like a rant, but the problem is that if you link to the wrong specs you lead people to places where the info will not be correct. Please fix your link – gman Apr 22 '20 at 01:31
  • @gman how can you put a uint8 (or 32) into an RGBA8 texture? Doesn't it require uploading floating-point uniforms to its vec4? – davidkomer Jan 06 '21 at 07:55
  • Oh, my mistake, RGBA8 uses the UNSIGNED_BYTE data format – davidkomer Jan 06 '21 at 08:20
  • Oh, but it does require converting to/from float for the vec4 in the shader... – davidkomer Jan 06 '21 at 08:53

1 Answer


Blending is considered part of the per-fragment operations that require floating-point values, hence it has no effect when rendering to unnormalized integer textures.

Section 4.1 of the spec lists 9 operations that happen to pixels/fragments.

Section 4.1.7 (Blending), which is operation 7 of the 9, says:

Blending applies only if the color buffer has a fixed-point format. If the color buffer has an integer format, proceed to the next operation.

In other words, the blending operation is skipped if you're using an integer format.

Instead you can simply discard the fragment if the alpha value is below a given threshold.

// Skip fully transparent fragments instead of blending them.
if (alpha < 0.5) discard;
// Write both IDs to the integer color attachment.
output_id = uvec4(input_symbol_id, input_instance_id, 0, 1);
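On the readback side, an integer color attachment must be read with the matching integer format. A minimal sketch, assuming a WebGL 2.0 context and a hypothetical `idFramebuffer` whose color attachment is RGBA32UI:

function readIds(gl: WebGL2RenderingContext, idFramebuffer: WebGLFramebuffer,
                 x: number, y: number): { symbolId: number; instanceId: number } {
  gl.bindFramebuffer(gl.FRAMEBUFFER, idFramebuffer);
  // Unsigned-integer color buffers must be read back as RGBA_INTEGER / UNSIGNED_INT.
  const ids = new Uint32Array(4);
  gl.readPixels(x, y, 1, 1, gl.RGBA_INTEGER, gl.UNSIGNED_INT, ids);
  // ids[0] and ids[1] correspond to output_id.x and output_id.y in the shader above.
  return { symbolId: ids[0], instanceId: ids[1] };
}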
  • LJᛃ, @gman, I'm working with Michael (who asked this question) and I have edited it to provide more context. We are afraid of the performance penalties of `discard`, so we would rather not use it if there is any other solution available. Would you be so kind as to take a look at the question again and suggest other possible solutions? In fact, we have also described one solution there that we believe would work, but we are still looking for something better (if possible). – Wojciech Danilo Apr 21 '20 at 15:56
  • [You only need to render a single pixel to do picking](https://stackoverflow.com/questions/51747996/on-the-browser-how-to-plot-100k-series-with-64-128-points-each/51757743) so there is no perf hit for discard. – gman Apr 21 '20 at 23:17
  • Please link to the [correct specs](https://www.khronos.org/registry/OpenGL/specs/es/3.0/es_spec_3.0.pdf). OpenGL specs are irrelevant and often wrong for WebGL. Linking to the irrelevant OpenGL specs will lead people to misinformation and frustration, which is not cool. – gman Apr 22 '20 at 01:36
  • @gman True, but not being able to link to individual sections of the PDF is quite annoying. Furthermore, I wasn't able to find the statement in question in said spec, nor in the WebGL2 spec. Please, if you find it, edit the answer accordingly, thanks! – LJᛃ Apr 22 '20 at 09:50