Can the rendering for a pixel be terminated in a vertex shader? For example, if a vertex does not meet a certain requirement, can the rendering of that vertex be cancelled?
What should that look like if you were rendering a triangle and decided that one vertex should not be rendered? I think your question already implies a geometrical problem. You could, however, move the vertex to the position of another (next or previous) vertex in the same polygon, more or less eliminating it (you would still see the effects of texturing/shading etc. there in the final image). – griffin Aug 13 '13 at 13:25
6 Answers
I'll assume you meant "can the rendering for a vertex be terminated". And no, you can't; OpenGL is very strict about the 1:1 ratio of input vertices to outputs for a VS. It also wouldn't really mean what you want it to, since vertices don't get rendered. Primitives do, and a primitive can be composed of more than one vertex. What would it mean to discard a vertex in the middle of a triangle strip, for example?
This is why Geometry Shaders have the ability to "cull" primitives; they deal specifically with a primitive, not merely a single vertex. Culling is done by simply not emitting any vertices: a GS must explicitly emit every vertex it wants to output.
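As a sketch of that idea (the `u_maxY` uniform and the triangle input/output layouts are assumptions for illustration), a geometry shader culls a primitive simply by returning before emitting anything:

```glsl
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

uniform float u_maxY; // hypothetical culling threshold

void main()
{
    // If any input vertex fails the test, emit nothing;
    // the whole triangle is then culled.
    for (int i = 0; i < 3; ++i)
        if (gl_in[i].gl_Position.y > u_maxY)
            return;

    // Otherwise pass the triangle through unchanged.
    for (int i = 0; i < 3; ++i) {
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
```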
Vertex shaders now have the ability to cull primitives. This is done using the "cull distance" feature of OpenGL 4.5. It's like `gl_ClipDistance`, only instead of clipping, it culls the entire primitive if all of its vertices fall on the negative side of the threshold.
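A minimal vertex-shader sketch of the cull-distance approach (the `u_threshold` uniform and attribute name are assumptions; requires OpenGL 4.5 or ARB_cull_distance):

```glsl
#version 450 core
layout(location = 0) in vec4 a_position;

uniform float u_threshold; // hypothetical threshold

out gl_PerVertex {
    vec4  gl_Position;
    float gl_CullDistance[1]; // redeclare to size the array explicitly
};

void main()
{
    gl_Position = a_position;
    // The primitive is culled only if this value is negative
    // at every one of its vertices.
    gl_CullDistance[0] = u_threshold - a_position.y;
}
```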

In theory, you can use a vertex shader to produce a degenerate (zero-area) primitive. A primitive with zero area should not result in anything rasterized, and thus no fragment will be rendered. It is not particularly intuitive, however, especially if you are using primitives that share vertices.
But no, canceling a vertex is almost meaningless. A vertex is the fundamental unit from which primitives are constructed. If you simply remove a single vertex, you will alter the rasterized output in unpredictable ways.
Put simply, vertices are not what create pixels on screen. It is the connectivity between vertices, which creates primitives, that ultimately leads to pixels. Geometry Shaders operate on a primitive-by-primitive basis, so they are generally where you would cancel rasterization and fragment shading in a programmatic fashion.
UPDATE:
It has come to my attention that you are using `GL_POINTS` as your primitive type. In this special case, all you have to do to prevent your vertex from going further down the pipeline is set its position somewhere outside of your camera's viewing volume. The vertex will be clipped, and no rasterization or fragment shading will occur.
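In vertex-shader terms (the `meetsRequirement` variable is a placeholder for whatever your actual condition is), that amounts to something like:

```glsl
if (!meetsRequirement) {
    // Anywhere outside the clip volume works; w = 1.0 keeps it finite.
    gl_Position = vec4(2.0, 2.0, 2.0, 1.0);
}
```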
This is a much more efficient solution than testing for some condition in a fragment shader and then discarding, because you skip rasterization and do not have to execute a fragment shader at all. Not to mention, `discard` usually winds up working as a post-shader-execution flag that tells the GPU to discard the result; the GPU is often forced to execute the entire shader no matter where in the shader you issue the `discard` instruction. Thus `discard` rarely gives a performance benefit, and in many cases it can disable other potentially more useful hardware optimizations. This is the nature of the way GPUs schedule their shader workload, unfortunately.
The cheapest fragment is the one you never have to process :)

Yes it should, and it's almost as easy as adding discard. Perhaps the questioner overestimates the difficulty of learning about geometry shaders? – Thomas Poole May 03 '17 at 07:12
You can't terminate rendering of a pixel in a vertex shader (it doesn't deal with pixels), but you can in the fragment shader using the `discard` instruction.

That actually only discards the results of the pixel. On modern hardware (Shader Model 3.0+), pixels are drawn in 2x2 blocks to make the derivative instruction work... these 2x2 blocks may be scheduled into warps (NV) or wavefronts (AMD) of shaders that run simultaneously. The only way to early-out of fragment shading is for every fragment in the warp/wavefront to discard. Otherwise, it will continue evaluating all of the instructions but will throw out the result at the end on a pixel-by-pixel basis. Very often `discard` or `kill` does not actually terminate anything, it just looks like it :) – Andon M. Coleman Aug 13 '13 at 19:02
Ah, I was under the impression that you wanted to skip the fragment shader stage to gain some performance advantage if a vertex didn't meet requirements. You said you're using GL_POINTS, so you could always set the vertex position to a point BEHIND your camera in your vertex shader, and this will actually give you a performance advantage - `discard` gives no performance advantage and can actually break certain hardware optimizations by the way :( – Andon M. Coleman Aug 18 '13 at 16:58
Don't let a prospective employer see you using `discard`, as many people have a lot of good reasons against it. They may be wrong; I'm not sure. I don't claim to understand all the reasons people have for never using `discard`, but I do believe it depends a lot on hardware. Isn't it a problem for tiled renderers? Something to do with depth-ordering optimisations? – Thomas Poole May 03 '17 at 07:04
I thought for a moment that Andon was under the impression that the distinction between a pixel and fragment is that there are four pixels to a fragment. I almost corrected it, before reading more carefully. – Thomas Poole May 03 '17 at 07:17
I am elaborating on Andon M. Coleman's answer, which IMHO deserves to be marked as the right one.
Even though the OpenGL specification is adamant that you cannot skip the fragment shader step (unless you remove the whole primitive in the geometry shader, as Nicol Bolas correctly pointed out, which is a bit overkill IMHO), you can do it in practice by letting OpenGL cull the whole geometry, as modern GPUs have early fragment rejection optimizations which will likely produce the same effect.
And, for the record, making the whole geometry get discarded is really, really easy: just write the vertex outside the (-1, -1, -1) to (1, 1, 1) cube:

```glsl
gl_Position = vec4(2.0, 2.0, 2.0, 1.0);
```

...and off you go!
Hope this helps

You can make alterations to the vertex stream, including the removal of vertices, but the place to do that would be in a geometry shader. If you look into geometry shaders, you may find the solution you're looking for in simply failing to 'emit' a vertex.
EDIT: If rendering a triangle strip, you would probably also want to take care to start a new primitive when a vertex is removed; you'll see why if you investigate geometry shaders. With GL_POINTS it would be less of an issue.
And yes, if you send a triangle strip of only 2 vertices, for instance, then indeed you fail to render anything -- just as you would do if you passed in such a degenerate strip in the first place. That does not mean the vertex stream can't be altered on the GL side of things, however.
Hope that helps. – Tom

Either set the position outside of NDC, or set a flag in the vertex shader, pass it to the fragment shader, and discard in the fragment shader according to the flag.
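A sketch of the flag-passing variant (the variable names and the test itself are illustrative):

```glsl
// --- vertex shader ---
layout(location = 0) in vec4 a_position;
flat out int v_discard; // flat: not interpolated across the primitive

void main()
{
    gl_Position = a_position;
    v_discard = (a_position.y > 0.5) ? 1 : 0; // hypothetical condition
}

// --- fragment shader ---
flat in int v_discard;
out vec4 fragColor;

void main()
{
    if (v_discard == 1)
        discard; // throw this fragment away
    fragColor = vec4(1.0);
}
```

Note that, as discussed above, this variant still pays the cost of rasterization and (usually) full shader execution; moving the vertex outside the clip volume is cheaper.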
