I'm trying to render a Teapot model from an OBJ file. I'm using the Fixed Function rendering pipeline, and I cannot change to the Programmable Pipeline. I would like to have some basic lighting and materials applied to the scene as well, so my teapot has a green shiny material applied to it. However, when I rotate the teapot around the Y-Axis, I can clearly see through to the back side of the teapot.
Here's what I've tried so far:
Changing which faces OpenGL culls with glCullFace (GL_FRONT, GL_BACK, GL_FRONT_AND_BACK); none produce the correct results.
Changing which winding order OpenGL treats as front-facing with glFrontFace (GL_CW, GL_CCW); none produce the correct results.
Testing the OBJ file to ensure that it orders its vertices correctly. When I drag the file into https://3dviewer.net/ it shows the correct Teapot that is not see-through.
Changing the lighting to see if that does anything at all. Changing the lighting does not stop the teapot from being see-through in some cases.
Disabling GL_BLEND; this made no difference.
Here is what I currently have enabled:
glLightfv(GL_LIGHT0, GL_AMBIENT, light0Color);
glLightfv(GL_LIGHT0, GL_DIFFUSE, light0DiffColor);
glLightfv(GL_LIGHT0, GL_SPECULAR, light0SpecColor);
glLightfv(GL_LIGHT0, GL_POSITION, position);
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, ambientIntensity);
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glEnable(GL_NORMALIZE);
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK); // note: GL_CCW is not a valid glCullFace argument; it raises GL_INVALID_ENUM and is ignored
glFrontFace(GL_CCW);
Here are the material properties:
float amb[4] = {0.0215, 0.1745, 0.0215, 1.0};
float diff[4] = {0.07568, 0.61424, 0.07568, 1.0};
float spec[4] = {0.633, 0.727811, 0.633, 1.0};
float shininess = 0.6 * 128;
glMaterialfv(GL_FRONT, GL_AMBIENT, amb);
glMaterialfv(GL_FRONT, GL_DIFFUSE, diff);
glMaterialfv(GL_FRONT, GL_SPECULAR, spec);
glMaterialf(GL_FRONT, GL_SHININESS, shininess);
Here is the rendering code:
glClearColor(0.0, 0.0, 0.0, 1.0);
glClearDepth(1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
glTranslatef(0, 0, -150);
glRotatef(r, 0.0, 1.0, 0.0);
glScalef(0.5, 0.5, 0.5);
r += 0.5;
m.draw(0, 0, 0);
I'm not sure if it's the cause of the problem, but I've included the model loading code below just in case it's relevant:
while(std::getline(stream, line))
{
// Checking the full prefix (including the trailing space) keeps "vt" and "vn"
// lines out of the "v" branch, and else-if makes the branches mutually exclusive
if (line.rfind("vn ", 0) == 0) // Vertex normal
{
std::stringstream ss(line.substr(3)); // Skip the "vn " prefix
glm::vec3 normal;
ss >> normal.x >> normal.y >> normal.z;
tempNormalData.push_back(normal);
}
else if (line.rfind("v ", 0) == 0) // Vertex position
{
std::stringstream ss(line.substr(2)); // Skip the "v " prefix
glm::vec3 position;
ss >> position.x >> position.y >> position.z;
tempVertData.push_back(position);
}
else if (line.rfind("f ", 0) == 0) // Face (assumes "f v1 v2 v3" with plain vertex indices)
{
std::stringstream ss(line.substr(2)); // Skip the "f " prefix
glm::vec3 faceData; // glm::ivec3 would be a better fit for integer indices
ss >> faceData.x >> faceData.y >> faceData.z;
tempFaceData.push_back(faceData);
}
}
if (tempVertData.size() != tempNormalData.size() && tempNormalData.size() > 0)
{
std::cout << "Not the same number of normals as vertices" << std::endl;
}
else
{
for (int i = 0; i < (int)tempVertData.size(); i++)
{
Vertex v;
v.setPosition(tempVertData[i]);
v.setNormal(tempNormalData[i]);
vertices.push_back(v);
}
for (int i = 0; i < (int)tempFaceData.size(); i++)
{
// OBJ indices are 1-based and are stored here in a float vec3, so truncate back to int
Vertex v1 = vertices[(int)tempFaceData[i].x - 1];
Vertex v2 = vertices[(int)tempFaceData[i].y - 1];
Vertex v3 = vertices[(int)tempFaceData[i].z - 1];
Face face(v1, v2, v3);
faces.push_back(face);
}
}
}
Lastly, when I draw the faces I just loop through the faces list and call the draw function on the face object. The face draw function just wraps a glBegin(GL_TRIANGLES) and a glEnd() call:
for (int i = 0; i < (int)faces.size(); i++)
{
auto& f = faces[i];
f.draw(position);
}
Face draw function:
glBegin(GL_TRIANGLES);
// glNormal3f() sets the *current* normal, which is captured by the next
// glVertex3f() call, so each normal must be issued before its vertex
glNormal3f(v1.getNormal().x, v1.getNormal().y, v1.getNormal().z);
glVertex3f(position.x + v1.getPosition().x, position.y + v1.getPosition().y, position.z + v1.getPosition().z);
glNormal3f(v2.getNormal().x, v2.getNormal().y, v2.getNormal().z);
glVertex3f(position.x + v2.getPosition().x, position.y + v2.getPosition().y, position.z + v2.getPosition().z);
glNormal3f(v3.getNormal().x, v3.getNormal().y, v3.getNormal().z);
glVertex3f(position.x + v3.getPosition().x, position.y + v3.getPosition().y, position.z + v3.getPosition().z);
glEnd();
I don't really want to implement my own Z-Buffer culling algorithm, and I'm hoping that there is a really easy fix to my problem that I'm just missing.
SOLUTION (thanks to Genpfault)
I had not requested a depth buffer from OpenGL. I'm using Qt as my windowing API, so I had to request it from my format object as follows:
format.setDepthBufferSize(32);
This requests a depth buffer of 32 bits, which fixed the issue.
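For completeness, here is a sketch of where that call typically lives in a Qt setup (assuming the QSurfaceFormat API of Qt 5/6; if you are on the older QGLFormat/QGLWidget API, the equivalent call goes on the QGLFormat passed to the widget). This is a configuration fragment, not a complete program:

```cpp
#include <QSurfaceFormat>

// Request a depth buffer before any GL window/context is created.
// 24 bits is the most widely supported size; the 32 bits requested
// above also worked here, but may fall back on some drivers.
QSurfaceFormat format;
format.setDepthBufferSize(24);
QSurfaceFormat::setDefaultFormat(format); // applies to all subsequently created surfaces
```

Without a depth buffer, glEnable(GL_DEPTH_TEST) has nothing to test against, which is why far triangles could overwrite near ones regardless of the culling settings.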