4

I am currently programming a graphics renderer in OpenGL by following several online tutorials. I've ended up with an engine whose rendering pipeline basically consists of rendering each object with a simple Phong shader. My Phong shader has a basic vertex shader that transforms each vertex, and a fragment shader that looks something like this:

// PhongFragment.glsl
uniform DirectionalLight dirLight;
...
vec3 calculateDirLight() { /* calculates the directional light using the uniform */ }
...
void main() {
    gl_FragColor = vec4(calculateDirLight(), 1.0);
}

The actual drawing of my object looks something like this:

// Render a Mesh
bindPhongShader();
setPhongShaderUniform(transform);
setPhongShaderUniform(directionalLight1);
mesh->draw(); // glDrawElements using the Phong Shader

This technique works well, but it has the obvious downside that I can only have one directional light unless I use uniform arrays. I could do that, but I wanted to see what other solutions exist first (mostly because I don't want to declare an array of some large number of lights in the shader and leave most of its entries empty). I stumbled on this one, which seems really inefficient, but I'm not sure. It basically involves redrawing the mesh every single time with a new light, like so:
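For reference, the uniform-array alternative I mean would look roughly like this (a sketch only; `MAX_DIR_LIGHTS`, `numDirLights`, and the `calculateDirLight` signature are placeholder names, not my actual code):

```glsl
// Sketch of the uniform-array approach: declare a fixed-size array
// plus a count of how many entries are actually in use this frame.
#define MAX_DIR_LIGHTS 4

uniform DirectionalLight dirLights[MAX_DIR_LIGHTS];
uniform int numDirLights; // set from the application each frame

void main() {
    vec3 color = vec3(0.0);
    for (int i = 0; i < numDirLights; ++i) {
        color += calculateDirLight(dirLights[i]);
    }
    gl_FragColor = vec4(color, 1.0);
}
```

My concern is that the shader still reserves `MAX_DIR_LIGHTS` slots even when only one light exists.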

// New Render
bindBasicShader(); // just transforms vertices, and sets the frag color to white.
setBasicShaderUniform(transform); // Set transformation uniform
mesh->draw();
// Enable Blending so that all light contributions are added up...
bindDirectionalShader();
setDirectionalShaderUniform(transform); // Set transformation uniform
setDirectionalShaderUniform(directionalLight1);
mesh->draw(); // Draw the mesh using the directionalLight1
setDirectionalShaderUniform(directionalLight2);
mesh->draw(); // Draw the mesh using the directionalLight2
setDirectionalShaderUniform(directionalLight3);
mesh->draw(); // Draw the mesh using the directionalLight3

This seems terribly inefficient to me, though. Aren't I redrawing all the mesh geometry over and over again? I have implemented this and it does give me the result I was looking for, multiple directional lights, but the frame rate has dropped considerably. Is this a stupid way of rendering multiple lights, or is it on par with using shader uniform arrays?

Thomas Paine
  • have you seen this: http://www.learnopengl.com/#!Lighting/Multiple-lights – xaxxon Dec 12 '16 at 05:48
  • Yes, that tutorial uses uniform arrays, but I don't really want to do that since I might end up with a game in which there is only 1 directional light, but my shader is calculating 4 directional lights. – Thomas Paine Dec 12 '16 at 06:13
  • so use a different shader when there's only 1 light? – xaxxon Dec 12 '16 at 07:08
  • You could use an [SSBO with a shader storage block of indeterminate array length](https://www.opengl.org/wiki/Interface_Block_(GLSL)#Shader_storage_blocks); then you get rid of the large array in the shader. The rest could stay the same as in the tutorial. Unless you want to draw the geometry multiple times (which will be slow), having some kind of array in the shader is the only way to go. – BDL Dec 12 '16 at 09:37
  • maybe this old Q&A: [How lighting in building games with unlimited number of lights works?](http://stackoverflow.com/a/31042808/2521214) will shine some light ... – Spektre Dec 12 '16 at 12:16
  • I tried researching SSBOs, but I'm a little bit confused on using them. Can I resize them when I add or delete lights or is the array size fixed from when the shader is initialized? – Thomas Paine Dec 12 '16 at 17:16

1 Answer

2

For forward rendering engines where lighting is handled in the same shader as the main geometry processing, the only really efficient way of doing this is to generate lots of shaders which can cope with the various combinations of light source, light count, and material under illumination.

In your case you would have one shader for 1 light, one for 2 lights, one for 3 lights, etc. It's a combinatorial nightmare in terms of number of shaders, but you really don't want to send all of your meshes multiple times (especially if you are writing games for mobile devices - geometry is very bandwidth heavy and sucks power out of the battery).
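In practice you usually generate those variants from a single source file using the preprocessor rather than writing each shader by hand. A hedged sketch (the macro and uniform names are illustrative; the application prepends the `#define` line when compiling each variant):

```glsl
// One shader source, compiled once per variant by prepending e.g.
// "#define NUM_DIR_LIGHTS 2" to the source string at compile time.
#ifndef NUM_DIR_LIGHTS
#define NUM_DIR_LIGHTS 1
#endif

uniform DirectionalLight dirLights[NUM_DIR_LIGHTS];

void main() {
    vec3 color = vec3(0.0);
    for (int i = 0; i < NUM_DIR_LIGHTS; ++i) {
        color += calculateDirLight(dirLights[i]);
    }
    gl_FragColor = vec4(color, 1.0);
}
```

Because `NUM_DIR_LIGHTS` is a compile-time constant, each variant's loop can be fully unrolled by the compiler and no empty array slots are ever evaluated.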

The other common approach is a deferred lighting scheme. These schemes store albedo, normals, material properties, etc. into a "Geometry Buffer" (e.g. a set of multiple-render-target FBO attachments), and then apply lighting after the fact as a set of post-processing operations. The complex geometry is sent once, with the resulting data stored in the MRT+depth render targets as a set of texture data. The lighting is then applied as a set of basic geometry (typically spheres or 2D quads), using the depth texture as a means to clip and cull light sources, and the other MRT attachments to compute the lighting intensity and color. It's a bit of a long topic for a SO post - but there are lots of good presentations around on the web from GDC and SIGGRAPH.
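As a minimal sketch of the geometry pass (attachment order and varying names are illustrative, using the older `gl_FragData` MRT mechanism to match your `gl_FragColor`-style code), the fragment shader just writes surface data instead of computing any lighting:

```glsl
// Deferred geometry pass: store surface attributes in the G-buffer;
// all lighting happens later in a separate screen-space pass.
uniform sampler2D diffuseTex;
varying vec2 uv;
varying vec3 worldNormal;
varying vec3 worldPos;

void main() {
    gl_FragData[0] = vec4(texture2D(diffuseTex, uv).rgb, 1.0); // albedo
    gl_FragData[1] = vec4(normalize(worldNormal), 0.0);        // normal
    gl_FragData[2] = vec4(worldPos, 1.0);                      // position
}
```

Each light is then rendered as cheap proxy geometry that samples these textures, so adding a light costs screen-space work rather than a full resubmission of the scene's meshes.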

Basic idea outlined here:

https://en.wikipedia.org/wiki/Deferred_shading

solidpixel
  • There's a very good tutorial for deferred shading at https://learnopengl.com/#!Advanced-Lighting/Deferred-Shading – skalarproduktraum Dec 12 '16 at 16:02
  • I am planning on implementing deferred rendering at some point but I also wanted to explore forward rendering first... So if I want to have 3 different types of light sources, with a maximum of 4 lights for each, would I need to make a ton of shaders (24 I think?) for every possible combination of light sources and amounts? – Thomas Paine Dec 12 '16 at 16:30
  • Note that ESSL supports a C-like preprocessor, so you can write some large shaders (e.g. include 4 of each light type) with each light wrapped in an "#ifdef" block. Creating variants is then as simple as changing preprocessor options, rather than having to create a lot of separate shaders by hand. – solidpixel Dec 12 '16 at 19:53