
I'd like to enumerate the general, fundamental circumstances under which multi-pass rendering becomes unavoidable, as opposed to keeping everything within a single shader program. Here's what I've come up with so far.

  • When a result requires non-local fragment information (i.e. context around the current fragment), e.g. for box filters, which a previous pass must already have supplied (see the sketch after this list);
  • When a result needs hardware interpolation done by a prior pass;
  • When a result acts as a pre-cache of some set of calculations, enabling substantially better performance than simply (re-)working through the entire set of calculations in every pass that uses them, e.g. transforming each fragment of the depth buffer in a particular and costly way that multiple later-pass shaders can then share, rather than each repeating those calculations. In short: calculate once, use more than once.
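To make the first point concrete, here is a minimal sketch of a post-process box filter in GLSL. The texture, uniform and varying names (uSceneTexture, uTexelSize, vTexCoord) are assumptions, not from any particular engine; the point is that the shader samples a neighbourhood around the current fragment, which is only possible because a previous pass has already rendered the scene into a texture.

```glsl
// Pass 2 fragment shader: 3x3 box blur over the scene colour buffer.
// uSceneTexture must have been filled by a previous pass (scene rendered
// to an offscreen framebuffer); a single-pass shader cannot read the
// colours of neighbouring fragments while they are still being written.
#version 330 core

uniform sampler2D uSceneTexture;  // output of the previous pass
uniform vec2 uTexelSize;          // 1.0 / texture resolution

in vec2 vTexCoord;
out vec4 fragColor;

void main()
{
    vec3 sum = vec3(0.0);
    for (int x = -1; x <= 1; ++x)
    {
        for (int y = -1; y <= 1; ++y)
        {
            sum += texture(uSceneTexture,
                           vTexCoord + vec2(x, y) * uTexelSize).rgb;
        }
    }
    fragColor = vec4(sum / 9.0, 1.0);
}
```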

I note from my own (naive) deductions above that vertex and geometry shaders don't really seem to come into the picture of deferred rendering, and so are probably usually run only in the first pass; this seems sensible to me, but either affirmation or negation of this, with detail, would be of interest.

P.S. I am going to leave this question open to gather good answers, so don't expect quick wins!

Engineer

2 Answers


Nice topic. Since I'm a beginner, the main reason I'd give is avoiding the unnecessary pixel/fragment shader calculations you get with forward rendering. With forward rendering you have to do a pass for every light in your scene, even where the pixel colours aren't affected. But that's just a comparison between forward rendering and deferred rendering.

As for keeping everything in the same shader program: the simplest advantage I can think of is that you aren't restricted to a fixed number of lights in your scene, since in GLSL, for instance, you can use either separate light uniforms or store the lights in a uniform array. You could do the same with forward rendering, but with a lot of lights in your scene the forward pixel/fragment shader becomes too expensive. That's all I really know, so I'd like to hear other theories as well.
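To illustrate the "many lights" point, here is a rough sketch of a deferred lighting pass in GLSL. The G-buffer texture names (uGAlbedo, uGNormal, uGPosition) and the Light struct are assumptions for the sake of the example; the idea is simply to loop over a uniform array of lights using data that a geometry pass has already written.

```glsl
// Deferred lighting pass (fragment shader), run once over a full-screen quad.
// The geometry pass has already written albedo, normals and positions into
// the G-buffer, so lighting cost scales with lights * screen pixels rather
// than lights * scene geometry.
#version 330 core

#define MAX_LIGHTS 64

struct Light {
    vec3 position;
    vec3 color;
};

uniform sampler2D uGAlbedo;
uniform sampler2D uGNormal;
uniform sampler2D uGPosition;
uniform Light uLights[MAX_LIGHTS];
uniform int uLightCount;

in vec2 vTexCoord;
out vec4 fragColor;

void main()
{
    vec3 albedo   = texture(uGAlbedo,   vTexCoord).rgb;
    vec3 normal   = normalize(texture(uGNormal, vTexCoord).xyz);
    vec3 position = texture(uGPosition, vTexCoord).xyz;

    vec3 result = vec3(0.0);
    for (int i = 0; i < uLightCount; ++i)
    {
        vec3 toLight = uLights[i].position - position;
        float atten  = 1.0 / (1.0 + dot(toLight, toLight));  // simple falloff
        float ndotl  = max(dot(normal, normalize(toLight)), 0.0);
        result += albedo * uLights[i].color * ndotl * atten;
    }
    fragColor = vec4(result, 1.0);
}
```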

Tom Quareme
  • Thanks for the input. Vertex vs fragment lighting is a case of being limited by number of objects/vertices as opposed to number of screen pixels. The crux is that in order to do pixel lighting, we need some pre-cached information -- this is in my list. However, you got me thinking about normals and interpolation -- another reason we often need to do pre-passes! Added to my list :) Cheers for assisting the process. – Engineer Jan 31 '15 at 13:59
  • There are also tiled forward renderers that divide the screen in eg. 16x16 tiles and calculate lighting only for lights affecting a tile. Examples are Forward+ and Clustered Forward. – SurvivalMachine Feb 02 '15 at 14:45

Deferred / multi-pass approaches are used when the results of the depth buffer (produced by rendering basic geometry) are needed in order to produce complex pixel / fragment shading effects based on depth, such as:

  • Edge / silhouette detection (see the sketch at the end of this answer)
  • Lighting

And also application logic:

  • GPU picking, which requires the depth buffer for ray calculation, and uniquely-coloured / ID'ed geometries in another buffer for identification of "who" was hit.
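As a concrete example of the edge / silhouette case listed above, here is a rough sketch of a depth-based edge-detection pass in GLSL. The texture and uniform names (uDepthTexture, uTexelSize, uEdgeThreshold) are assumptions; the key point is that the shader compares stored depths of neighbouring fragments, which again requires a prior pass to have filled the depth buffer.

```glsl
// Post-process edge detection from the depth buffer.
// uDepthTexture was produced by an earlier geometry pass; edges are flagged
// wherever neighbouring depth samples differ sharply.
#version 330 core

uniform sampler2D uDepthTexture;  // depth from the first pass
uniform vec2 uTexelSize;          // 1.0 / texture resolution
uniform float uEdgeThreshold;     // e.g. 0.01, tune per scene

in vec2 vTexCoord;
out vec4 fragColor;

void main()
{
    float center = texture(uDepthTexture, vTexCoord).r;
    float right  = texture(uDepthTexture, vTexCoord + vec2(uTexelSize.x, 0.0)).r;
    float up     = texture(uDepthTexture, vTexCoord + vec2(0.0, uTexelSize.y)).r;

    float gradient = abs(right - center) + abs(up - center);
    float edge = step(uEdgeThreshold, gradient);

    fragColor = vec4(vec3(edge), 1.0);  // white where a silhouette edge is found
}
```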
Engineer