
I'm currently developing an .obj file loader on Android. I have the basics done, and the 3D mesh is drawn correctly with OpenGL. Unfortunately I have a problem with binding the texture. Let me explain in more detail:

The .obj file has the following structure:

v -0.751804 0.447968 -1.430558
v -0.751802 2.392585 -1.428428
... etc list with all the vertices ...

vt 0.033607 0.718905
vt 0.033607 0.718615
... etc list with all the texture coordinates ...

f 237/1 236/2 253/3 252/4
f 236/2 235/5 254/6 253/3
... etc list with all the faces ...

The f lines indicate the indices at which the corresponding vertex and texture coordinates are stored, like

f vertex_index/texture_coord_index

So my program

  1. parses the vertices and stores them in a Vector<Float>,
  2. parses the texture coordinates and stores them in a Vector<Float>
  3. and finally parses the faces and stores every vertex index in a Vector<Short> and every texture coordinate index in another Vector<Short> (a rough sketch of this parsing follows below)
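
Roughly, the parsing step looks like this (a simplified sketch, not my exact code; the indices in the file are 1-based, so 1 is subtracted before storing them):

private void parseLine(String line){
    String[] parts = line.trim().split("\\s+");
    if(parts[0].equals("v")){                        // vertex position: v x y z
        vertices.add(Float.parseFloat(parts[1]));
        vertices.add(Float.parseFloat(parts[2]));
        vertices.add(Float.parseFloat(parts[3]));
    }else if(parts[0].equals("vt")){                 // texture coordinate: vt u v
        textures.add(Float.parseFloat(parts[1]));
        textures.add(Float.parseFloat(parts[2]));
    }else if(parts[0].equals("f")){                  // face: f v/vt v/vt v/vt ...
        for(int i=1; i<parts.length; i++){
            String[] indices = parts[i].split("/");
            faces.add((short)(Short.parseShort(indices[0]) - 1));
            texturePointers.add((short)(Short.parseShort(indices[1]) - 1));
        }
    }
}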

After all this code, I'm creating the appropriate buffers:

public void buildVertexBuffer(){
    ByteBuffer vBuf = ByteBuffer.allocateDirect(vertices.size() * 4);
    vBuf.order(ByteOrder.nativeOrder());
    vertexBuffer = vBuf.asFloatBuffer();
    vertexBuffer.put(toFloatArray(vertices));
    vertexBuffer.position(0);
}

where vertices is the Vector<Float> that stores the vertex coordinates
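
toFloatArray() (and the analogous toShortArray() used below) is just a small helper that unboxes a Vector into a primitive array, roughly:

private float[] toFloatArray(Vector<Float> values){
    float[] result = new float[values.size()];
    for(int i=0; i<values.size(); i++){
        result[i] = values.get(i);    // unbox each Float into the primitive array
    }
    return result;
}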

public void buildFaceBuffer(){
    ByteBuffer byteBuffer = ByteBuffer.allocateDirect(faces.size() * 2);
    byteBuffer.order(ByteOrder.nativeOrder());
    faceBuffer = byteBuffer.asShortBuffer();
    faceBuffer.put(toShortArray(faces));
    faceBuffer.position(0);
}

where faces is the Vector<Short> that stores the face indices, and

public void buildTextureBuffer(Vector<Float> textures){
    ByteBuffer byteBuffer = ByteBuffer.allocateDirect(texturePointers.size() * 4 * 2);
    byteBuffer.order(ByteOrder.nativeOrder());
    textureBuffer = byteBuffer.asFloatBuffer();

    for(int i=0; i<texturePointers.size(); i++){
        float u = textures.get(texturePointers.get(i) * 2);
        float v = textures.get(texturePointers.get(i) * 2 + 1);

        textureBuffer.put(u);
        textureBuffer.put(v);
    }
    textureBuffer.position(0);  
}

where textures holds the float texture coordinates and texturePointers holds the indices into textures.

The binding happens here:

public int[] loadTexture(GL10 gl, Context context){
    if(textureFile == null)
        return null;

    int resId = getResourceId(textureFile, R.drawable.class);

    if(resId == -1){
        Log.d("Bako", "Texture not found...");
        return null;
    }

    Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resId);

    int[] textures = new int[1];
    gl.glGenTextures(1, textures, 0);
    gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);

    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);

    /*gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);*/

    /*int size = bitmap.getRowBytes() * bitmap.getHeight();
    ByteBuffer buffer = ByteBuffer.allocateDirect(size);
    bitmap.copyPixelsToBuffer(buffer);
    gl.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, bitmap.getWidth(), bitmap.getHeight(), 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_INT, buffer);*/

    GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);

    bitmap.recycle();

    return textures;
}

Finally, the draw() method of my mesh looks like this:

public void draw(GL10 gl){
    if(bindedTextures != null){
        gl.glBindTexture(GL10.GL_TEXTURE_2D, bindedTextures[0]);
        gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
        gl.glFrontFace(GL10.GL_CW);
    }

    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);

    for(int i=0; i<parts.size(); i++){
        ModelPart modelPart = parts.get(i);
        Material material = modelPart.getMaterial();

        if(material != null){
            FloatBuffer a = material.getAmbientColorBuffer();
            FloatBuffer d = material.getDiffuseColorBuffer();
            FloatBuffer s = material.getSpecularColorBuffer();
            gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_AMBIENT, a);
            gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_SPECULAR, s);
            gl.glMaterialfv(GL10.GL_FRONT_AND_BACK, GL10.GL_DIFFUSE, d);
        }

        gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, modelPart.getTextureBuffer()); // returns the texture buffer created with the buildTextureBuffer() method
        gl.glEnableClientState(GL10.GL_NORMAL_ARRAY);
        gl.glNormalPointer(GL10.GL_FLOAT, 0, modelPart.getNormalBuffer());
        gl.glDrawElements(GL10.GL_TRIANGLES, modelPart.getFacesSize(), GL10.GL_UNSIGNED_SHORT, modelPart.getFaceBuffer());
        //gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
        //gl.glDisableClientState(GL10.GL_COLOR_ARRAY);
        gl.glDisableClientState(GL10.GL_NORMAL_ARRAY);
        gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    }
}

When I run the application the 3D model is drawn like a charm, but the texture is somehow stretched. The image that contains the texture has a red background with the actual image in the center, and this red background is drawn over the whole 3D model.

Well, my first question is whether the textureBuffer is built correctly. Do I have to change the code in buildTextureBuffer()?

And the second one: is the draw() method correct? Could my problem have to do with the face buffer?

  • Without a proper, complete inspection of your code, you seem not to have factored in that OBJ coordinates put the origin at the top left, while OpenGL puts it at the bottom left. Does a quick switch to `float v = 1.0f - textures.get(texturePointers.get(i) * 2 + 1);` fix anything? – Tommy May 22 '14 at 14:52
  • @Tommy thanks for answering. No, this change doesn't seem to solve my problem. Well, I couldn't just paste all my code, but I can give you its repository [link](https://github.com/bakoproductions/Fossils_Viewer) on GitHub – Michael Bakogiannis May 22 '14 at 14:56

1 Answer


So, in OpenGL a vertex is whatever combination of information describes a particular point on the model. You're using the old fixed pipeline, so a vertex is one or more of a location, some texture coordinates, a normal and a colour.

In OBJ a v is only a location. The thing that maps to the OpenGL concept of a vertex is each unique combination of — in your case — location + texture coordinate pairs given after an f.

You need a mapping whereby, if the f says 56/92, you can look up 56/92 and find out that you consider that to be, say, vertex 23, and have communicated suitable arrays to OpenGL such that the location value in slot 23 was the 56th thing the OBJ gave as a v and the 92nd thing it gave as a vt.

Putting it another way, OBJ files have an extra level of indirection that OpenGL does not.

It looks to me like you're not resolving that difference. A common approach would be to use a HashMap from v/vt pair to output index, building your output arrays on demand as you parse the f.
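
For example, a rough sketch of that approach (the names below are illustrative, not taken from your project; it uses java.util.HashMap and java.util.Vector):

// Raw data parsed from the v and vt lines (filled elsewhere):
Vector<Float> objVertices  = new Vector<Float>();   // x, y, z triplets
Vector<Float> objTexCoords = new Vector<Float>();   // u, v pairs

// What you actually hand to OpenGL:
Vector<Float> outVertices  = new Vector<Float>();
Vector<Float> outTexCoords = new Vector<Float>();
Vector<Short> outIndices   = new Vector<Short>();   // goes into the face buffer
Map<String, Short> knownPairs = new HashMap<String, Short>();

// Call this for every "v/vt" entry on every f line.
void addFaceCorner(String corner){                  // corner is e.g. "237/1"
    Short index = knownPairs.get(corner);
    if(index == null){
        String[] parts = corner.split("/");
        int vi = Integer.parseInt(parts[0]) - 1;    // OBJ indices are 1-based
        int ti = Integer.parseInt(parts[1]) - 1;

        // copy the location into the next OpenGL vertex slot...
        outVertices.add(objVertices.get(vi * 3));
        outVertices.add(objVertices.get(vi * 3 + 1));
        outVertices.add(objVertices.get(vi * 3 + 2));

        // ...and the matching texture coordinate into the same slot
        outTexCoords.add(objTexCoords.get(ti * 2));
        outTexCoords.add(objTexCoords.get(ti * 2 + 1));

        index = (short)(outVertices.size() / 3 - 1);
        knownPairs.put(corner, index);
    }
    outIndices.add(index);
}

You then build the vertex and texture FloatBuffers from outVertices and outTexCoords exactly as you do now, and the ShortBuffer you pass to glDrawElements from outIndices; because the output arrays are filled in lock step, index N refers to matching data in every buffer.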

Tommy
  • If I understand correctly, this HashMap value must point to a unique array that stores everything (vertices, texture coordinates and normals). I'm a newbie in OpenGL. How could this array be constructed? – Michael Bakogiannis May 22 '14 at 15:12
  • Oh, no, it can be several distinct arrays, one for locations, one for texture coordinates, etc, as long as the indices are the same. So location 54 has texture coordinate 54, etc. You'll often see it built to a single array because that helps with caching but it's something you can worry about as the next step once you've got this part working. – Tommy May 22 '14 at 15:15
  • OK, I'm starting to get a feel for how things work here. So I have to create these 3 arrays by taking advantage of the HashMap, and after that the `faceBuffer` should store the values of that particular HashMap... – Michael Bakogiannis May 22 '14 at 15:24
  • 2
  • Here are a couple of answers I wrote to similar questions that explain in more detail what @Tommy talks about here: http://stackoverflow.com/questions/23710829/why-is-my-obj-parser-rendering-meshes-like-this/23713424#23713424, http://stackoverflow.com/questions/23349080/opengl-index-buffers-difficulties/23356738#23356738. – Reto Koradi May 22 '14 at 16:02
  • I think the `HashMap` approach is the most natural but you don't technically *have* to use it. But, yes, you're going to tell OpenGL to draw a triangle with vertices 8, 9 and 10, then OpenGL is going to inspect those indices in your vertex buffer, texture buffer and normal buffer. So the thing at index 8 in the vertex buffer better be the location that goes with the thing at index 8 in your texture and normal buffers. OBJ doesn't arrange its data like that but a `HashMap` is an easy way to rearrange what OBJ gives you. – Tommy May 22 '14 at 16:31