
I am using WebGL to draw a simple frustum geometry. I tried to color the front (smaller) and back (bigger) surfaces red, and the side surfaces white.

[image: rendered result]

This is how it looks. Note that the smaller surface is nearer to the eye and the bigger surface is further from the eye; do not be fooled by the image. It looks as if the renderer colored the side faces that are furthest from the eye and ignored the faces up front.

How should I make this work correctly?

Below is my setup code. The depth buffer is cleared during initialization and never touched again.

function init(){
  // Retrieve <canvas> element
  var canvas = document.getElementById('webgl');

  // Get the rendering context for WebGL
  gl = getWebGLContext(canvas);
  if (!gl) {
    console.log("lib1.js: init() failed to get WebGL rendering context 'gl'\n");
    console.log("from the HTML-5 Canvas object named 'canvas'!\n\n");
    return;
  }

  // Initialize shaders
  if (!initShaders(gl, VSHADER_SOURCE, FSHADER_SOURCE)) {
    console.log('lib1.js: init() failed to initialize shaders.');
    return;
  }

  bufferSetup(gl);

  // Set the background-clearing color and enable the depth test
  gl.clearColor(0.0, 0.0, 0.0, 1.0);  // black!
  gl.enable(gl.DEPTH_TEST);
  gl.enable(gl.CULL_FACE);   // cull back-facing triangles (gl.BACK is the default)
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
}
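
For reference, a typical per-frame draw step clears both the color and depth buffers again before drawing, rather than relying on the single clear at initialization. A minimal sketch, assuming the frustum is drawn with indexed triangles and n is the index count (both the function and n are hypothetical names):

function draw(gl, n) {
  // Clear color AND depth every frame so stale depth values
  // never suppress newly drawn fragments.
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  gl.drawElements(gl.TRIANGLES, n, gl.UNSIGNED_SHORT, 0);
}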

Thanks to Denis for pointing this out! I also tried turning gl.CULL_FACE on and off, and tried both gl.BACK and gl.FRONT for gl.cullFace. None of them works correctly.

  gl.disable(gl.CULL_FACE);
  // gl.cullFace(gl.BACK)
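
For reference, a minimal sketch of the culling-related state that can be toggled here (winding order plus which face gets culled); counter-clockwise front faces and back-face culling are WebGL's defaults:

gl.enable(gl.CULL_FACE);   // turn face culling on (gl.disable turns it off)
gl.frontFace(gl.CCW);      // counter-clockwise triangles are front-facing (the default)
gl.cullFace(gl.BACK);      // discard back-facing triangles (the default); gl.FRONT discards front faces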
  • The answer would depend on your shader code and the way you specified your vertices, please add these details to your question. – idoby Jan 26 '19 at 08:25
  • @idoby Thanks! I've added these to the question – Y. Luo Jan 26 '19 at 08:33
  • try gl.cullFace(gl.BACK) or gl.cullFace(gl.FRONT), or play with culling turned off, to see if that helps with your issue. Most probably you specified the vertices "the wrong way around", which means the renderer considers the other face of the triangle to be the front face, not the one you think is front-facing. – Denis Jan 26 '19 at 08:51
  • @Denis Thanks! I was trying this, sorry I didn't add this to the description. I tried both turning the gl.CULL_FACE on and off, also tried gl.BACK and gl.FRONT. None of them works. – Y. Luo Jan 26 '19 at 09:14
  • *"Note that the smaller surface is nearer the eyes and the bigger surfaces is further from the eye."* - Do you use perspective projection? Is the further away surface the larger one, but seems to be smaller because of the perspective projection? – Rabbid76 Jan 26 '19 at 12:15
  • please make an [MVCE](https://meta.stackoverflow.com/a/349790/128511) – gman Jan 27 '19 at 06:11

1 Answer


We know that the CVV coordinate system, the non-transformed drawing axes (CVV == the canonical view volume, the +/-1 cube we depict on-screen), is RIGHT-HANDED, with its origin at the center of the CVV cube: x increases rightwards, y increases upwards, and z increases outwards, towards your eye. But it does not follow that fragments with LARGER, MORE-POSITIVE z in the CVV get drawn 'nearer' to us, overlapping fragments with SMALLER, MORE-NEGATIVE z at the same screen location. Why?

WebGL (and OpenGL-ES, and OpenGL) has a historical quirk in an otherwise-sensible method for 3D drawing: it defines 'depth' as a computed value between 0.0 and 1.0, and it is NOT simply the z value in CVV coordinates.

By default, WebGL defines depth as:

  • 0.0 for 'least' depth, nearest the screen or camera; (z = +1 in CVV)
  • 1.0 for 'most' depth, farthest from the screen or camera; (z = -1 in CVV)

The GPU computes depth automatically, but correct results require vertices that were transformed by a 3D camera-projection matrix that mimics a camera lens. If we didn't define or use such a projection matrix, the GPU depth calculation reduces to:

depth = 0.5 + 0.5 * (gl_Position.z / gl_Position.w);

In other words, without this 'camera-projection' matrix, the GPU computes 'depth' backwards -- vertices at z = -1 get depth of 0 when it SHOULD be 1. For more details about how the GPU computes depth, see: How does WebGL set values in the depth buffer?
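
To make that mapping concrete, here is a tiny illustrative helper (purely hypothetical, not part of any WebGL API) that evaluates the default depth formula for a clip-space vertex, assuming the default gl.depthRange(0.0, 1.0):

// Illustration only: evaluate the default depth mapping in JavaScript.
function defaultDepth(clipZ, clipW) {
  var ndcZ = clipZ / clipW;      // perspective divide
  return 0.5 + 0.5 * ndcZ;       // NDC z in [-1, +1] maps to depth in [0, 1]
}
console.log(defaultDepth(-1.0, 1.0));  // 0.0 -- treated as nearest
console.log(defaultDepth(+1.0, 1.0));  // 1.0 -- treated as farthest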

Knowing this, we can fix it in either of two ways:

  1. We could write code that always sets the first ModelMatrix transform to flip the sign of z, like this: myMatrix.setScale(1, 1, -1); then every z value we compute has its sign reversed. (That gets a little messy when combined with a complicated scene graph; see the sketch after this list.)
  2. We could leave the z values unchanged and instead tell WebGL to apply different rules for its use of the depth buffer; that is a more sensible solution. Try adding these lines of code:
gl.enable(gl.DEPTH_TEST); // disabled by default, so let's be SURE it is on.
gl.clearDepth(0.0);       // each time we 'clear' our depth buffer, set all
                          // pixel depths to 0.0  (1.0 is DEFAULT)
gl.depthFunc(gl.GREATER); // gl.LESS is DEFAULT; reverse it!
                          // draw a pixel only if its depth value is GREATER
                          // than the depth buffer's stored value.
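
For comparison, a minimal sketch of option 1, assuming the Matrix4 class from cuon-matrix.js (the library implied by the setScale call above); the uniform location u_ModelMatrix and the follow-up rotation are hypothetical:

// Option 1 sketch: flip the sign of z before any other model transforms.
var modelMatrix = new Matrix4();
modelMatrix.setScale(1, 1, -1);      // reverse z so larger z ends up 'nearer'
modelMatrix.rotate(30.0, 0, 1, 0);   // ...then apply the usual model transforms
gl.uniformMatrix4fv(u_ModelMatrix, false, modelMatrix.elements);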

Credit to my computer graphics professor, John E. Tumblin.
