
[Please don't forget to read EDIT 1, EDIT 2, EDIT 3, EDIT 4 and EDIT 5]


original question:

I am following https://outerra.blogspot.com/2013/07/logarithmic-depth-buffer-optimizations.html and https://outerra.blogspot.com/2012/11/maximizing-depth-buffer-range-and.html to enable a logarithmic depth buffer in OpenGL.

environment:

Windows 10, GLAD/GLFW, Assimp (for importing the satellite .obj model).

When using a normal depth buffer, everything renders correctly.

rendering with normal depth buffer

But when I tried to add a logarithmic depth buffer in the vertex and fragment shaders, everything went wrong.

rendering with logarithmic depth buffer

The satellites render with only their frames left.

The near and far planes are defined as:

double NearPlane = 0.01;
double FarPlane = 5.0 * 1e8 * SceneScale;

with

#define SceneScale 0.001 * 0.001
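
(With SceneScale = 0.001 * 0.001 = 1e-6, FarPlane works out to 5e8 × 1e-6 = 500 scene units; with SceneScale = 1.0, as in EDIT 3, it is 5e8.)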

Here is my vertex shader:

#version 410 core

layout (location = 0) in vec3 position;
layout (location = 1) in vec3 aNormal;
layout (location = 2) in vec2 aTexCoord;
// layout (location = 3) in float near; // fixed in EDIT 3
// layout (location = 4) in float far;

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
uniform float near; // added in EDIT 3
uniform float far; // added in EDIT 3

layout (location = 0) out vec2 texCoord;
layout (location = 1) out vec3 Normal;
layout (location = 2) out vec3 FragPos;
layout (location = 3) out float logz;

void main()
{
    texCoord = aTexCoord;
    gl_Position = projection * view * model * vec4(position, 1.0f);

    // log depth buffer
    float Fcoef = 2.0 / log2(far + 1.0);
    float flogz = 1.0 + gl_Position.w;
    gl_Position.z = log2(max(1e-6, flogz)) * Fcoef - 1.0;
    gl_Position.z *= gl_Position.w; // fixed in EDIT 2
    logz = log2(flogz) * Fcoef * 0.5;

    FragPos = vec3(model * vec4(position, 1.0));
    Normal = mat3(transpose(inverse(model))) * aNormal;
}
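
For clarity, with Fcoef = 2.0 / log2(far + 1.0) this maps the clip-space w to NDC depth z = 2 * log2(w + 1) / log2(far + 1) - 1, so w = 0 lands at -1 and w = far lands at +1 after the perspective divide (which is why the extra multiplication by gl_Position.w from EDIT 2 is needed).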

Fragment shader:

#version 410 core

out vec4 color;

layout (location = 0) in vec2 texCoord;
layout (location = 1) in vec3 Normal;
layout (location = 2) in vec3 FragPos;
layout (location = 3) in float logz;

uniform sampler2D ourTexture;
uniform vec3 lightPosition;
uniform vec3 lightColor;

void main()
{
    gl_FragDepth = logz; // log depth buffer

    float ambientStrength = 0.1;
    vec3 ambient = ambientStrength * lightColor;
    vec3 texColor = texture(ourTexture, texCoord).rgb;

    vec3 norm = normalize(Normal);
    vec3 lightDir = normalize(lightPosition - FragPos);
    float diff = max(dot(norm, lightDir), 0.0);
    vec3 diffuse = diff * lightColor;

    vec3 result = (ambient + diffuse) * texColor;
    color = vec4(result, 1.0f);
}

So, does anyone know where I went wrong? Many thanks.


EDIT 1

By setting

glEnable(GL_DEPTH_CLAMP);

I got another incorrectly rendered scene, different from the one above.

glEnable(GL_DEPTH_CLAMP)


EDIT 2

I think I had missed an important line, though adding it still didn't give me a correctly rendered scene:

gl_Position.z *= gl_Position.w;

The scene now looks as follows, without glEnable(GL_DEPTH_CLAMP):

add the missing important line


EDIT 3 (partly solved)

I found that I made another mistake by writing:

layout (location = 3) in float near;
layout (location = 4) in float far;

Actually these should be uniform values!

uniform float near;
uniform float far;

Now everything works at small scale.
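
For reference, this is roughly how I upload them from the application side now (a minimal sketch; shaderProgram here stands for the linked program ID of my shader class):

glUseProgram(shaderProgram);
// the shader declares "uniform float near;" and "uniform float far;"
glUniform1f(glGetUniformLocation(shaderProgram, "near"), (float)NearPlane);
glUniform1f(glGetUniformLocation(shaderProgram, "far"),  (float)FarPlane);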

But there are still problems when I scale the scene up to real size, i.e. setting

#define SceneScale 1.0

which makes the Earth radius 6378137.0 m and the satellites boxes of roughly 1 m × 1 m × 1 m. When rendering, the Earth and Moon look OK, but the two satellites seem torn apart.

real size render with satellites torn apart
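
I suspect this is a 32-bit float precision issue: the satellites sit roughly 4.2e7 m from the origin, and for floats in the range [2^25, 2^26) ≈ [3.4e7, 6.7e7] the spacing between representable values is 4 m, which is larger than the 1 m satellites themselves, so their vertex positions get quantized (see EDIT 4 below).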


EDIT 4 (solved)

The remaining problem is likely related to jitter, and can possibly be solved by using the relative-to-center (RTC) rendering method, which can be found here.

More advanced methods like RTE (relative to eye) can further improve the rendering; I'm still working on that.

UPDATE: solved it with the following code:

glm::dmat4 model_sate1(1.0);
model_sate1 = glm::translate(model_sate1, Sate1Pos);
model_sate1 = glm::rotate(model_sate1, EA[0].Yaw, glm::dvec3(0.0, 0.0, 1.0));
model_sate1 = glm::rotate(model_sate1, EA[0].Pitch, glm::dvec3(0.0, 1.0, 0.0));
model_sate1 = glm::rotate(model_sate1, EA[0].Roll, glm::dvec3(1.0, 0.0, 0.0));
SateShader.setMat4("model", model_sate1);
SateShader.setMat4("view", view);
SateShader.setMat4("projection", projection);
modelview = view * model_sate1;
glm::dvec3 Sate1PosV = glm::dvec3(modelview * glm::dvec4(Sate1Pos, 1.0));
modelview[0].w = Sate1PosV.x; // rtc here
modelview[1].w = Sate1PosV.y;
modelview[2].w = Sate1PosV.z;
SateShader.setMat4("ModelViewMat", modelview);
Sate1.Draw(SateShader);

and in vertex shader:

gl_Position = projection * ModelViewMat * vec4(position, 1.0f);

By applying the RTC method I was able to render the scene at real size. However, when the satellites start to move (driven by another program), there is no jitter, but they begin to blink (pop up and disappear) at the frequency of the data refresh rate.

I guess:

  1. the satellites fly too fast in the real-scale scene.
  2. rendering the satellite model under such rapid movement takes a period of time that cannot be ignored.

EDIT 5 (dealing with the new blink problem)

I found out what causes the problem in the following link:

opengl update model position while camera position not updated in time causes blinking (model fly out of view)

and I solved it; see the comments under that link.

UPDATE: this problem seems to be hardware-related. I didn't see the problem happen on my two desktop computers, but it happened on my laptop.

  • You are setting `gl_Position` twice! It should be the last line of the vertex shader, to avoid optimizations ignoring the code after it. Also, are your `logz` values correct? I mean inside the frustum... What are your depth values and what distances to the camera do you have... If you use SI units you will have large numbers that are usually not precise enough in just floats, see [Is it possible to make realistic n-body solar system simulation in matter of size and mass?](https://stackoverflow.com/a/28020934/2521214) and [How to correctly linearize depth in OpenGL ES in iOS?](https://stackoverflow.com/a/42515399/2521214) – Spektre May 23 '20 at 08:26
  • @Spektre Thanks for your suggestion. However, I don't think I set `gl_Position` twice in the vertex shader; the second time I only set its `z` value (that is where the log depth buffer works). The two satellites are located at (42.164, 0.061, 0.0) and (42.164, 0.088, 0.0) and the camera is at (42.356, 0.037, 0.021), so I suppose both satellites are in the frustum, right? I moved the camera further away and both satellites are still incorrectly rendered. – abmin May 23 '20 at 08:48
  • @Spektre I had already read the two posts several days ago. The "n-body" post has answers using multiple frustums (which I think is more difficult for me to learn and implement), and the second post turns to a linear depth buffer, which is impossible to implement in a planetary-scale scene. – abmin May 23 '20 at 08:52
  • 1. You can and should set `gl_Position` in a single assignment, as the last thing your shader does. Some GLSL implementations tend to remove any code after the first assignment to `gl_Position` in the vertex shader. The same goes for the output color in the fragment shader. 2. Your number ranges look OK from the accuracy aspect. Whether you are inside the frustum or not depends on the `logz` in the fragment shader and the znear, zfar settings of your frustum. 3. Your numbers are small, you do not need multiple frustums. – Spektre May 23 '20 at 08:52
  • You can check the `logz` value inside the fragment shader with this: [GLSL debug prints](https://stackoverflow.com/a/44797902/2521214). IIRC it should be in the `<0,+1>` or `<-1,0>` range, but I'm not sure which one right now. – Spektre May 23 '20 at 08:55
  • @Spektre Thanks for your advice. I'll try to put gl_Position and FragColor on the last line. I tried to use a log depth buffer because I want to implement a real-size Earth-Moon-satellite system in the future. And I will try the debug prints; it will take me a while since I am not familiar with that. 0.0 – abmin May 23 '20 at 09:03
  • You just copy/paste that code into your shaders and pass a font texture to it... In the fragment shader you need to use the same `logz` for the whole area of the text... that could be a challenge, but you can render some flat primitive like a quad parallel to the camera instead of the satellite... – Spektre May 23 '20 at 09:07
  • @Spektre Sorry, I can hardly follow you; I am new to OpenGL and I searched a lot of places to see how to implement a logarithmic depth buffer (i.e. the very first two links above), and they have almost the same simple procedure: 1) in the vertex shader, calculate `gl_Position.z` by log or log2; 2) pass `Fcoef` and `farPlane` to the fragment shader; 3) update `gl_FragDepth` accordingly. – abmin May 23 '20 at 12:41
  • @Spektre Glad to tell you that I've solved the problem through EDIT 1 ~ EDIT 4, as updated in the problem description. But there is another problem, described in EDIT 5 with a new link. I would appreciate it if you could have a look at it and provide any solution. :-) – abmin May 29 '20 at 14:13
