
I'm attempting to create omnidirectional/point lighting in OpenGL 3.3. I've searched around on the internet and on this site, but so far I have not been able to accomplish this. From my understanding, I am supposed to:

Generate a framebuffer using depth component

Generate a cubemap and bind it to said framebuffer

Draw to the individual parts of the cubemap as referenced by the enums GL_TEXTURE_CUBE_MAP_*

Draw the scene normally, and compare the depth value of the fragments against those in the cubemap

Now, I've read that it is better to store the distance from the light to the fragment, rather than the fragment's depth, as it allows for easier cubemap lookup (something about not needing to check each individual face?)

My current issue is that the light comes out as a sphere and does not generate shadows. Another issue is that the framebuffer complains about not being complete, although I was under the impression that a framebuffer does not need a renderbuffer if it renders to a texture.

Here is my framebuffer and cube map initialization:

framebuffer = 0;
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

glGenTextures(1, &shadowTexture);
glBindTexture(GL_TEXTURE_CUBE_MAP, shadowTexture);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
for(int i = 0; i < 6; i++){
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_DEPTH_COMPONENT16, 800, 800, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
}

glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE); // depth-only FBO: no color buffer to read from either

Shadow Vertex Shader

void main(){
    gl_Position = depthMVP * M * vec4(position, 1);
    pos = (M * vec4(position, 1)).xyz;
}

Shadow Fragment Shader

void main(){
    fragmentDepth = distance(lightPos, pos);
}

Vertex Shader (unrelated bits cut out)

uniform mat4 depthMVP;
void main() {
    PositionWorldSpace = (M * vec4(position,1.0)).xyz;
    gl_Position = MVP * vec4(position, 1.0 );

    ShadowCoord = depthMVP * M * vec4(position, 1.0);
}

Fragment Shader (unrelated code cut)

uniform samplerCube shadowMap;
void main(){
    float bias = 0.005;
    float visibility = 1;
    if(texture(shadowMap, ShadowCoord.xyz).x < distance(lightPos, PositionWorldSpace) - bias)
        visibility = 0.1;
}

Now, as you are probably wondering: what is depthMVP? The depth projection matrix is currently an orthographic projection with the range [-10, 10] in each direction, and depthMVP is defined like so:

glm::mat4 depthMVP = depthProjectionMatrix * ??? * i->getModelMatrix();

The issue here is that I don't know what the ??? value is supposed to be. It used to be the camera matrix; however, I am unsure if that is what it is supposed to be. The draw code for the sides of the cubemap is then done like so:

for(int loop = 0; loop < 6; loop++){
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_CUBE_MAP_POSITIVE_X + loop, shadowTexture, 0);
    glClear(GL_DEPTH_BUFFER_BIT);
    for(auto i : models){
        glUniformMatrix4fv(modelPos, 1, GL_FALSE, glm::value_ptr(i->getModelMatrix()));
        glm::mat4 depthMVP = depthProjectionMatrix * ??? * i->getModelMatrix();
        glUniformMatrix4fv(glGetUniformLocation(shadowProgram, "depthMVP"), 1, GL_FALSE, glm::value_ptr(depthMVP));
        glBindVertexArray(i->vao);
        glDrawElements(GL_TRIANGLES, i->triangles, GL_UNSIGNED_INT, 0);
    }
}

Finally, the scene gets drawn normally (I'll spare you the details). Before the calls that draw onto the cubemap, I bind the framebuffer generated earlier and change the viewport to 800 by 800. I switch the framebuffer back to 0 and reset the viewport to 800 by 600 before the normal drawing. Any help on this subject would be greatly appreciated.

Update 1

After some tweaking and bug fixing, this is the result I get. I fixed an error where depthMVP was not being applied correctly; what I am drawing here is the distance stored in the cubemap. https://i.stack.imgur.com/7wUMv.jpg

Basically, what happens is that it draws the same one-sided projection on each side. This makes sense, since we use the same view matrix for each side; however, I am not sure what sort of view matrix I am supposed to use. I think they are supposed to be lookAt() matrices that are positioned at the center and look out in the cube map side's direction. However, the question that arises is how I am supposed to use these multiple projections in my main draw call.

Update 2

I've gone ahead and created these matrices, however I am unsure how valid they are (they were adapted from a website for DX cubemaps, so I inverted the Z coordinate).

case 0://Positive X
    sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(1,0,0), glm::vec3(0,-1,0));
    break;
case 1://Negative X
    sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(-1,0,0), glm::vec3(0,-1,0));
    break;
case 2://Positive Y
    sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(0,1,0), glm::vec3(0,0,1));
    break;
case 3://Negative Y
    sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(0,-1,0), glm::vec3(0,0,-1));
    break;
case 4://Positive Z
    sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(0,0,1), glm::vec3(0,-1,0));
    break;
case 5://Negative Z
    sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(0,0,-1), glm::vec3(0,-1,0));
    break;

The question still stands: what am I supposed to translate the view portion of depthMVP by, given that these are 6 individual matrices? Here is a screenshot of what it currently looks like, with the same frag shader (i.e. actually rendering shadows): https://i.stack.imgur.com/haaPd.png As you can see, the shadows seem fine; however, the positioning is obviously an issue. The view matrix I used to generate this was just an inverse translation of the camera's position (as the lookAt() function would do).

Update 3

Code, as it currently stands: Shadow Vertex

void main(){
    gl_Position = depthMVP * vec4(position, 1);
    pos = (M * vec4(position, 1)).xyz;
}

Shadow Fragment

void main(){
    fragmentDepth = distance(lightPos, pos);
}

Main Vertex

void main(){
    PositionWorldSpace = (M * vec4(position, 1)).xyz;
    ShadowCoord = vec4(PositionWorldSpace - lightPos, 1);
}

Main Frag

void main(){
    float texDist = texture(shadowMap, ShadowCoord.xyz / ShadowCoord.w).x;
    float dist = distance(lightPos, PositionWorldSpace);
    if(texDist < dist)
        visibility = 0.1;
    outColor = vec3(texDist); // This is to visualize the depth maps
}

The perspective matrix being used

glm::mat4 depthProjectionMatrix = glm::perspective(90.f, 1.f, 1.f, 50.f);

Everything is currently working, sort of. The data that the texture stores (i.e. the distance) seems to be stored in a weird manner: it appears to be normalized, as all values are between 0 and 1. Also, there is a 1x1x1 area around the viewer that has no projection, but this is due to the frustum, and I think it will be easy to fix (like offsetting the cameras back 0.5 toward the center).

user975989
  • Your depth component is all wrong. There is no 16-bit floating-point depth format. The only way to get a floating-point depth buffer in current hardware is to either use a 32-bit float Depth format or a 64-bit Depth+Stencil format (32-bit float Depth + 8-bit Stencil + (24-bit Unusable)). – Andon M. Coleman Aug 12 '13 at 17:17
  • You should not be using the depth MVP matrix to alter your texture lookup when using a cube map. Recall that a cube map lets you sample a point somewhere in a virtual sphere by interpolating between the appropriate 6 cube faces. Effectively, all you want is the direction from the cube map's origin (lightPos) to the fragment. – Andon M. Coleman Aug 12 '13 at 17:28
  • Thank you both for your help. The only issue now is that the texture seems to store data differently, in that it does not store the true distance but rather a version of it with a range between 0 and 1. How would I go about either a: storing the real distance, or b: un-ranging it. – user975989 Aug 12 '13 at 19:35

1 Answer


If you leave the fragment depth for OpenGL to determine, you can take advantage of hardware hierarchical Z optimizations. Basically, if you ever write to gl_FragDepth in a fragment shader (without using the newfangled conservative depth GLSL extension), it prevents a hardware optimization called hierarchical Z. Hi-Z, for short, is a technique where rasterization of some primitives can be skipped on the basis that the depth values for the entire primitive lie behind values already in the depth buffer. But it only works if your shader never writes an arbitrary value to gl_FragDepth.

If instead of writing a fragment's distance from the light to your cube map, you stick with traditional depth you should theoretically get higher throughput (as occluded primitives can be skipped) when writing your shadow maps.

Then, in your fragment shader where you sample your depth cube map, you would convert the distance values into depth values by using a snippet of code like this (where f and n are the far and near plane distances you used when creating your depth cube map):


float VectorToDepthValue(vec3 Vec)
{
    vec3 AbsVec = abs(Vec);
    float LocalZcomp = max(AbsVec.x, max(AbsVec.y, AbsVec.z));

    const float f = 2048.0;
    const float n = 1.0;
    float NormZComp = (f+n) / (f-n) - (2*f*n)/(f-n)/LocalZcomp;
    return (NormZComp + 1.0) * 0.5;
}

Code borrowed from SO question: Omnidirectional shadow mapping with depth cubemap

So applying that extra bit of code to your shader, it would work out to something like this:


void main () {
    float shadowDepth = texture(shadowMap, ShadowCoord.xyz/ShadowCoord.w).x;
    float testDepth   = VectorToDepthValue(lightPos - PositionWorldSpace);
    if (shadowDepth < testDepth)
        visibility = 0.1;
}

Andon M. Coleman
  • Rather than uploading z far/near plane distance values as uniforms, you can access them through gl_DepthRange.near / gl_DepthRange.far – KaiserJohaan Nov 12 '13 at 21:24
  • @KaiserJohaan: Not in this case you cannot; the near and far values in this shader refer to the values used when the shadow map was created. More advanced shadow algorithms tend to fit the depth range to the scene for enhanced precision, so you are not guaranteed to have the same depth range when applying shadows as you had when you created the maps. – Andon M. Coleman Nov 13 '13 at 00:29