
I am using Rust with the Glium library. I want to display a large number of circles on the screen, but I cannot decide on the best way to do it.

One option is to build the circles out of triangles; another is to draw them with a fragment shader, i.e. take the distance from each circle's center to each point on the screen and, if it is less than the radius, paint that point in the desired color. For clarity, here is an example of how I draw one circle:

// Hard-coded example: circle of radius 200 centered at (200, 200)
vec2 point = vec2(200.0, 200.0);
float dist = distance(point, gl_FragCoord.xy);

// Fragments closer to the center than the radius are painted green
if (dist < 200.0)
    gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);

Which method will be faster? Is there a better way to do it? The size and color of each circle will change at run time.
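
For comparison, here is roughly how I have built a circle out of triangles before (a minimal sketch in Rust; the names are illustrative, `segments` is the value I currently tie to the radius, and the resulting vertices would be drawn with glium's `TriangleFan` primitive):

// Minimal sketch: CPU-side triangle fan for one circle.
#[derive(Copy, Clone)]
struct Vertex {
    position: [f32; 2],
}

fn circle_fan(center: [f32; 2], radius: f32, segments: u32) -> Vec<Vertex> {
    // First vertex is the fan center, then segments + 1 rim vertices
    // (the last rim vertex repeats the first to close the circle).
    let mut vertices = vec![Vertex { position: center }];
    for i in 0..=segments {
        let angle = i as f32 / segments as f32 * std::f32::consts::TAU;
        vertices.push(Vertex {
            position: [
                center[0] + radius * angle.cos(),
                center[1] + radius * angle.sin(),
            ],
        });
    }
    vertices
}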

aitvann
  • Do both **and benchmark it**. – Shepmaster Jul 01 '18 at 16:31
  • In part depends on your quality requirements and circle size, thus how many triangles are required. If hexagons are enough, use hexagons. – Andreas Jul 01 '18 at 16:49
  • There are also some issues with clipping in the fragment-shader approach, since MSAA stops working there, whereas the triangle method is unaffected. – Andreas Jul 01 '18 at 16:51
  • @Andreas, the last time I drew circles using triangles, the number of points in the circle depended directly on its radius; that is, if the radius is 32 (pixels), then the number of points is 32. It is important to me not to see the facets. – aitvann Jul 01 '18 at 21:37
  • @Shepmaster even a benchmark can be hard to interpret if you are just drawing circles, since you have no idea where the actual bottleneck of your application is. He could notice a 10% difference that has nothing to do with the actual efficiency of either technique. – florent teppe Jul 02 '18 at 11:40
  • If you absolutely need it pixel-perfect, fragment shader is the way to go – Andreas Jul 02 '18 at 17:55

2 Answers


Faster?

Faster for the CPU?

Faster overall?

Nobody knows your environment. The graphics chip can be VERY powerful, and if you make rational use of its power, it is quite possible that your program will be "faster".

When you render a bucket of triangles, your CPU does the work of preparing the geometry parameters, etc.; when you render only two triangles per draw call, that work is done on the GPU side instead. But this approach can be harder to implement, because you need to transfer the raw circle data (I mean the radius and the center coordinates) to the fragment shader. For a small number of circles this is trivial, but not for many. Think about that.
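
A rough sketch of what I mean with glium (the names, buffer layout and shader interface are my assumptions, not taken from your code): one small quad is shared by all circles, the per-circle center, radius and color go into a per-instance vertex buffer, and everything is drawn in a single instanced call.

use glium::index::{NoIndices, PrimitiveType};
use glium::uniforms::EmptyUniforms;
use glium::{implement_vertex, Surface};

// One quad shared by every circle; the vertex shader scales and translates it.
#[derive(Copy, Clone)]
struct QuadVertex {
    corner: [f32; 2], // corners of a unit quad, from (-1, -1) to (1, 1)
}
implement_vertex!(QuadVertex, corner);

// Per-instance data: one entry per circle, re-uploaded when sizes or colors change.
#[derive(Copy, Clone)]
struct CircleInstance {
    center: [f32; 2],
    radius: f32,
    color: [f32; 4],
}
implement_vertex!(CircleInstance, center, radius, color);

fn draw_circles(
    display: &impl glium::backend::Facade,
    frame: &mut glium::Frame,
    program: &glium::Program,
    quad: &glium::VertexBuffer<QuadVertex>,
    circles: &[CircleInstance],
) {
    // Upload the per-circle data and draw all circles in one instanced call.
    // The fragment shader only tests distance(p, center) < radius inside
    // its own small quad, not over the whole screen.
    let instances = glium::VertexBuffer::new(display, circles).unwrap();
    let indices = NoIndices(PrimitiveType::TriangleStrip);
    frame
        .draw(
            (quad, instances.per_instance().unwrap()),
            &indices,
            program,
            &EmptyUniforms,
            &Default::default(),
        )
        .unwrap();
}

Changing sizes and colors at run time then only means rewriting the instance buffer, not rebuilding any triangle meshes.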

If you do it with a distance-field texture instead, then you must create that texture on the CPU or in a separate draw call.
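
For example, a distance field for a single circle could be baked on the CPU with something like this (an illustrative sketch; uploading it as a texture and sampling it in the shader are left out):

// Illustrative sketch: bake a signed distance field for a unit circle
// into a size x size grid of f32 values (negative inside, positive outside).
fn circle_distance_field(size: u32) -> Vec<f32> {
    let half = size as f32 / 2.0;
    let mut data = Vec::with_capacity((size * size) as usize);
    for y in 0..size {
        for x in 0..size {
            // Distance from the pixel center to the circle edge,
            // normalized so that the circle radius is 1.0.
            let dx = (x as f32 + 0.5 - half) / half;
            let dy = (y as f32 + 0.5 - half) / half;
            data.push((dx * dx + dy * dy).sqrt() - 1.0);
        }
    }
    data
}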

Stranger in the Q

Well, drawing with the GPU (a shader) should be way faster than making a whole bunch of triangles (if I'm indeed correct myself..).

JoeDortman
  • 3
    This is a comment, not an answer. – mcarton Jul 01 '18 at 16:47
  • 1
    Do you have any proof for this claim? Rendering a full-screen quad for each circle doesn't sound fast. Also note, that only one circle is needed which can be scaled and translated in the vertex shader. – BDL Jul 01 '18 at 17:20
  • Oh, okay, my bad ^^ – JoeDortman Jul 01 '18 at 20:43
  • @BDL, I'm thinking of making one full-screen quad and drawing all the circles on it. Why do we need such a quad for each circle? – aitvann Jul 01 '18 at 20:52
  • If you do it in one quad, you'll have to do blending (for smooth borders) and depth testing in the shader as well. This could work when only drawing circles. In the general case you'll have to use image atomic operations for depth testing, which I highly doubt are faster than doing the usual geometry rendering. – BDL Jul 01 '18 at 21:11
  • @BDL yes, but that is GPU work, not CPU work, right? And then the performance depends on GPU power? – Stranger in the Q Jul 03 '18 at 06:28