
I'm having a terrible time figuring out a way to better handle the seams between 3D tile objects in my game engine. You only see them when the camera is tilted down at a far enough angle, like this... I do not believe it is a texture problem or a texture rendering problem (but I could be wrong).

Below are two screenshots - the first one demonstrates the problem, while the second is the UV wrapping I'm using for the tiles in Blender. I'm leaving room in the UVs for overlap, so that if the texture needs to overdraw at smaller mipmap levels, I should still be good. I am loading textures with the following texture params:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

It appears to me that the sides of the 3D tiles are slightly being drawn, and you especially notice the artifact due to the lighting angle (directional) that is being applied from this angle.

Are there any tricks or things I can check to eliminate this effect? I am rendering in "layers", and within each layer I sort by camera distance (furthest away first). All of these objects are in the same layer. Any ideas would be greatly appreciated!

If useful, this is a project for iPhone/iPad using OpenGLES2.0. I'm happy to provide any code snippets - just let me know what might be a good place to start.

Screenshot from engine demonstrating seams between tiles at low angles

UVs from Blender

UPDATE WITH VERTEX/PIXEL SHADER & MODEL VERTICES

Presently, I am using PowerVR's POD format to store model data exported from Blender (via Collada, then PowerVR's Collada2Pod converter). Here are the GL_SHORT vertex coordinates (model space):

64 -64 32
64 64 32
-64 64 32
-64 -64 32
64 -64 -32
-64 -64 -32
-64 64 -32
64 64 -32
64 -64 32
64 -64 -32
64 64 -32
64 64 32
64 64 32
64 64 -32
-64 64 -32
-64 64 32
-64 64 32
-64 64 -32
-64 -64 -32
-64 -64 32
64 -64 -32
64 -64 32
-64 -64 32
-64 -64 -32

So everything should be perfectly flush, I would expect. Here's the vertex shader:

attribute highp vec3  inVertex; 
attribute highp vec3  inNormal;
attribute highp vec2  inTexCoord;

uniform highp mat4  ProjectionMatrix;
uniform highp mat4  ModelviewMatrix;
uniform highp mat3  ModelviewITMatrix;
uniform highp vec3  LightColor;
uniform highp vec3  LightPosition1;
uniform highp float LightStrength1;
uniform highp float LightStrength2;
uniform highp vec3  LightPosition2;
uniform highp float Shininess;

varying mediump vec2  TexCoord;
varying lowp    vec3  DiffuseLight;
varying lowp    vec3  SpecularLight;

void main()
{
    // transform normal to eye space
    highp vec3 normal = normalize(ModelviewITMatrix * inNormal);

    // transform vertex position to eye space
    highp vec3 ecPosition = vec3(ModelviewMatrix * vec4(inVertex, 1.0));

    // initialize light intensity varyings
    DiffuseLight = vec3(0.0);
    SpecularLight = vec3(0.0);

    // Run the directional light
    PointLight(true, normal, LightPosition1, ecPosition, LightStrength1);
    PointLight(true, normal, LightPosition2, ecPosition, LightStrength2);

    // Transform position
    gl_Position = ProjectionMatrix * ModelviewMatrix * vec4(inVertex, 1.0);

    // Pass through texcoords and filter
    TexCoord = inTexCoord;
}      
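
For reference, the PointLight() helper called above isn't shown; a minimal sketch of what it might look like follows. The names match the shader above, but the attenuation/specular model here is an assumption, not my actual code (it would be declared before main()):

```glsl
// Sketch only: a plausible PointLight() matching the calls above.
// The lighting model here is an assumption, not the actual implementation.
void PointLight(bool directional, highp vec3 normal, highp vec3 lightPos,
                highp vec3 ecPosition, highp float strength)
{
    highp vec3 lightDir;
    if (directional)
        lightDir = normalize(lightPos);              // treat position as a direction
    else
        lightDir = normalize(lightPos - ecPosition); // point light: vector to light

    highp float NdotL = max(dot(normal, lightDir), 0.0);
    DiffuseLight += LightColor * NdotL * strength;

    if (NdotL > 0.0)
    {
        // Blinn-Phong half vector against the eye-space view direction
        highp vec3 halfVec = normalize(lightDir + vec3(0.0, 0.0, 1.0));
        highp float NdotH = max(dot(normal, halfVec), 0.0);
        SpecularLight += LightColor * pow(NdotH, Shininess) * strength;
    }
}
```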
David
  • Maybe a precision problem for the vertices in the vertex shader? – Johnmph Aug 29 '11 at 23:06
  • Possible... I'll check and get back with you. I think I'm using highp designator... I'm storing the actual vertex data in GL_SHORT for packed attributes. Could that be causing a problem? I think I had this issue though prior to switching from GL_FLOAT to GL_SHORT. – David Aug 30 '11 at 02:05
  • John, could you take a look at the vertex shader? I'm using a high vec3 for my inVertex, and looking at the GL_SHORTs I have coming in from the model file for this 3D object, I can't see where any precision errors could be coming into play. Am I missing something obvious maybe? – David Aug 30 '11 at 14:43
  • It looks correct; check the answer below. I also think it's because the result of a computation is truncated due to the finite precision of the variables – Johnmph Sep 01 '11 at 01:48

2 Answers


I do not know how your boxes are drawn, but I believe this is the issue. When computing the vertices for each box, I guess you do something like this (pseudocode):

int i, j, k;
float width;

for i,j,k in dims:
  upper_left  = (i,j,k)*width;
  upper_right = (upper_left.x+width, upper_left.y, upper_left.z);
  lower_left  = (upper_left.x, upper_left.y+width, upper_left.z);
  lower_right = (upper_left.x+width, upper_left.y+width, upper_left.z);

This will fail because you lose precision when adding, so corners that should share the same position actually do not. This is what is creating the gaps.

Instead, you should do something like this:

for i,j,k: 
  upper_left = (i,j,k)*width;
  upper_right = (i+1,j,k)*width;
  lower_left = (i,j+1,k)*width;
  lower_right = (i+1,j+1,k)*width;

This will ensure that the corners will use the same coordinates.

EDIT

It is still a precision problem. From what I understand, you are doing a drawcall per block, where the only thing that changes per block is the ModelviewMatrix. This means that you are expecting that this line

position = ProjectionMatrix * ModelviewMatrix * vec4(inVertex, 1.0);

will give the same value for two different combinations of inVertex and ModelviewMatrix, which floating-point arithmetic does not guarantee.

To solve this you can do "fake" instancing (since ES does not support instancing), by saving the per-instance values in uniforms and computing the per-attribute values from an index given in an attribute.
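
To sketch the idea (all names here are made up for illustration, not from the question's code): per-instance data lives in a uniform array, and each vertex carries the index of the instance it belongs to, so every block goes through the same matrix and shared corners compute identically:

```glsl
// Sketch with hypothetical names: "fake" instancing on ES 2.0.
attribute highp vec3  inVertex;
attribute highp float inInstanceIndex;      // same value for every vertex of a block

uniform highp vec3 InstanceOffset[64];      // world position per block (assumption: 64 blocks/batch)
uniform highp mat4 ViewProjectionMatrix;    // one matrix shared by the whole batch

void main()
{
    highp vec3 worldPos = inVertex + InstanceOffset[int(inInstanceIndex)];
    gl_Position = ViewProjectionMatrix * vec4(worldPos, 1.0);
}
```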

Edit 2:

OK, please do not think about "fake" instancing for now. Just ensure that the coordinates are the same: instead of providing coordinates for only one block, provide them for all the blocks, and use only one ModelViewMatrix. That will probably be faster as well.

  • Arne, thanks for the quick response! Presently, I'm using GL_SHORTS to store model vertices in PowerVR's .POD file format. I have updated the original post to show the vertices used in the model - as you can see, at least in this case, I don't think it is a precision issue. I'll also post the shader I'm using, in case that might be causing a problem... – David Aug 30 '11 at 14:37
  • I think I understand what you're suggesting... the ModelviewMatrix is relative to each model's position to the camera... so if you have two objects right next to each other sharing the top left and top right vertices (left of obj1, right of obj2), that vertex is in the same world coordinates, but its modelviewmatrix is a slightly different creating this artifact. Is that right? I think I get what instancing is (rendering one object many times with unique attributes?), but not sure how I could pull this off. Could you provide just a little more detail? I think I almost have it! – David Aug 30 '11 at 22:05
    Well - OpenGL ES does not support instancing, so the simple solution is to create all the coordinates on the device. What I mentioned is a trick to get around the fact that GLES does not support instancing. It is quite a special thing: you save all the data that is per-instance (the position the model has in the world) and the data that is per-vertex (the relative positions) in uniforms, and use an identifier on each vertex to indicate which vertex and instance variables to use. It is probably not worth looking at in your case at the moment, since it will probably only complicate your code. – Arne Bergene Fossaa Aug 30 '11 at 23:11
  • Arne, that makes a lot more sense now... I was probably foggy late last night from a long day's coding! I have a busy day ahead, but I'll play with it later tonight and get back with you. Thanks so much for taking the time to provide such helpful insight! – David Aug 31 '11 at 14:08
  • Just an update, I think this is the solution - but I am still playing with it. Got bogged down with some other problems that came up when updating to the new Xcode beta - go figure! I'll wrap up soon and get back with you... thanks again! – David Sep 01 '11 at 17:59
  • Turns out the "fix" for me was increasing the depth buffer's precision. The problem doesn't seem to be that the vertices aren't lining up properly, it's that they ARE lining up in the same location, so the poly's between vertices are exactly on top of each other. The further the camera is away, the more z-fighting you see. I moved the camera's near clip from 1.0f to 25.0f, which works well still in the game and dramatically solved the problem. – David Sep 02 '11 at 04:10
  • I did spend a lot of time this evening trying to figure out what you were meaning though by using one ModelView matrix... my ModelView matrix is storing: an object's 1) Rotation, 2) Scale, 3) Translation (world position) (that's all the World matrix), but also the camera's information (View matrix). How can I use one to render all my blocks, when (if nothing else) the world positions are all unique? The inVertex structure just contains the model vertices (in model coordinates). I may still try to batch a lot of the tiles for performance either way. Am I misunderstanding though? Thanks! – David Sep 02 '11 at 04:12

Try these:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
Lijiayu
  • I'm afraid those filter settings didn't help - still have the same visual artifacts at the lower angles. Thanks though, great idea! – David Sep 02 '11 at 02:27