
I am studying the source code of an open source project and they have a use of the function glDrawElements which I don't understand. While I am an experienced programmer, I am quite new to the GL API, so I would appreciate it if someone could tell me how this works.

Let's start with the drawing part. The code looks like this:

for (int i = 0; i < numObjs; i++) {

    glDrawElements(GL_TRIANGLES, vboIndexSize(i), GL_UNSIGNED_INT, (void*)(UPTR)vboIndexOffset(i));
}

vboIndexSize(i) returns the number of indices for the current object, and vboIndexOffset(i) returns the offset in bytes, within a flat memory array in which the vertex data AND the indices of the objects are stored.

The part I don't understand is the (void*)(UPTR)vboIndexOffset(i). I have looked at the code many times: the function vboIndexOffset returns an int32, and UPTR also casts the returned value to an int32. So how can you cast an int32 to a void* and expect this to work? But let's assume I made a mistake there and that it actually returns a pointer to this variable instead. The 4th argument of the glDrawElements call is an offset in bytes within a memory block. Here is how the data is actually stored on the GPU:

int ofs = m_vertices.getSize();
for (int i = 0; i < numObj; i++)
{
    obj[i].ofsInVBO = ofs;
    obj[i].sizeInVBO = obj[i].indices->getSize() * 3;
    ofs += obj[i].indices->getNumBytes();
}

vbo.resizeDiscard(ofs);
memcpy(vbo.getMutablePtr(), vertices.getPtr(), vertices.getSize());
for (int i = 0; i < numObj; i++)
{
    memcpy(
        m_vbo.getMutablePtr(obj[i].ofsInVBO),
        obj[i].indices->getPtr(),
        obj[i].indices->getNumBytes());
}

So all they do is calculate the number of bytes needed to store the vertex data, then add to this the number of bytes needed to store the indices of all the objects we want to draw. They then allocate a memory block of that size and copy the data into it: first the vertex data, then the indices. Once this is done, they push it to the GPU using:

glGenBuffers(1, &glBuffer);
glBindBuffer(GL_ARRAY_BUFFER, glBuffer);
checkSize(size, sizeof(GLsizeiptr) * 8 - 1, "glBufferData");
glBufferData(GL_ARRAY_BUFFER, (GLsizeiptr)size, data, GL_STATIC_DRAW);

What's interesting is that they store everything in the GL_ARRAY_BUFFER. They never upload the vertex data to a GL_ARRAY_BUFFER and the indices to a separate GL_ELEMENT_ARRAY_BUFFER.

But to go back to the code where the drawing is done, they first do the usual stuff to declare vertex attribute. For each attribute:

glBindBuffer(GL_ARRAY_BUFFER, glBuffer);
glEnableVertexAttribArray(loc);
glVertexAttribPointer(loc, size, type, GL_FALSE, stride, pointer);

This makes sense and is just standard. And then the code I already mentioned:

for (int i = 0; i < numObjs; i++) {

    glDrawElements(GL_TRIANGLES, vboIndexSize(i), GL_UNSIGNED_INT, (void*)(UPTR)vboIndexOffset(i));
}

So the question: even if (UPTR) actually returns a pointer to the variable (the code doesn't indicate this, but I may be mistaken; it's a large project), I didn't know it was possible to store all the vertex AND index data in the same memory block using GL_ARRAY_BUFFER, then call glDrawElements with the 4th argument being the byte offset, within this block, to the first index of the current object. I thought you needed GL_ARRAY_BUFFER and GL_ELEMENT_ARRAY_BUFFER to declare the vertex data and the indices separately. I didn't think you could declare all the data in one go using GL_ARRAY_BUFFER, and I can't get it to work on my side anyway.

Has anyone seen this working before? I haven't had a chance to get it working yet, and wonder if someone could tell me if there's something specific I need to be aware of to make it work. I tested with a simple triangle with position, normal and texture coordinate data, so I have 8 * 3 floats for the vertex data and an array of 3 integers for the indices: 0, 1, 2. I then copy everything into a memory block, initialize glBufferData with it, and try to draw the triangle with:

int n = 96; // offset in bytes into the memory block, first int in the index list
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, (void*)(&n));

It doesn't crash but I can't see the triangle.

EDIT:

Adding the code that doesn't seem to work for me (crashes).

float vertices[] = {
    0,  1, 0, // Vertex 1 (X, Y, Z)
    2, -1, 0, // Vertex 2 (X, Y, Z)
   -1, -1, 0, // Vertex 3 (X, Y, Z)
    3,  1, 0, // Vertex 4 (X, Y, Z)
};

U8 *ptr = (U8*)malloc(4 * 3 * sizeof(float) + 6 * sizeof(unsigned int));
memcpy(ptr, vertices, 4 * 3 * sizeof(float));
unsigned int indices[6] = { 0, 1, 2, 0, 3, 1 };
memcpy(ptr + 4 * 3 * sizeof(float), indices, 6 * sizeof(unsigned int));

glGenBuffers(1, &vbo);

glBindBuffer(GL_ARRAY_BUFFER, vbo);

glBufferData(GL_ARRAY_BUFFER, 4 * 3 * sizeof(float) + 6 * sizeof(unsigned int), ptr, GL_STATIC_DRAW);

glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(0);

free(ptr);

Then when it comes to draw:

glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);

// see stackoverflow.com/questions/8283714/what-is-the-result-of-null-int/
typedef void (*TFPTR_DrawElements)(GLenum, GLsizei, GLenum, uintptr_t);
TFPTR_DrawElements myGlDrawElements = (TFPTR_DrawElements)glDrawElements;

myGlDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, uintptr_t(4 * 3 * sizeof(float)));

This crashes the app.

see answer below for solution

user18490
  • I've explained it in depth here: http://stackoverflow.com/a/8284829/524368 – datenwolf Jul 23 '14 at 14:05
  • @datenwolf, thanks and I used your solution. My question though was more about getting the idea of packing the vertex data and the indices together in the ARRAY_BUFFER to work. Which causes my program to crash right now. – user18490 Jul 23 '14 at 18:29

2 Answers


This is due to OpenGL re-using the signatures of the old client-side vertex array calls. When you bind a GL_ARRAY_BUFFER VBO, a subsequent call to glVertexAttribPointer expects an offset into the VBO (in bytes), which is then cast to a (void *). The GL_ARRAY_BUFFER binding remains in effect until another buffer is bound, just as the GL_ELEMENT_ARRAY_BUFFER binding remains in effect until another 'index' buffer is bound.

You can encapsulate the buffer binding and attribute pointer (offset) state using a Vertex Array Object. The address in your example isn't valid; cast offsets with: (void *) n
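As a non-runnable sketch of the VAO encapsulation described here (it assumes a current core-profile context, a linked program, and a buffer already filled with glBufferData; the names vao, vbo and loc are illustrative, not from the question's project):

```c
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glBindBuffer(GL_ARRAY_BUFFER, vbo);           /* binding used below...        */
glVertexAttribPointer(loc, 3, GL_FLOAT, GL_FALSE, 0, (void *)0);
glEnableVertexAttribArray(loc);               /* ...is captured per attribute */

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo);   /* this binding IS VAO state    */

glBindVertexArray(0);
/* At draw time, glBindVertexArray(vao) restores both the attribute
   setup and the element array binding. */
```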

Brett Hale
  • Thank you for this answer. How can you cast an int to a void pointer? I don't get that? Could you please explain? Plus, I know my question is not clear, but what I'd like to know is if it is valid to use a VBO to store the vertex data AND the indices? – user18490 Jul 23 '14 at 13:36
  • A pointer is really just an integer that represents a memory address. Aside from size differences (i.e. pointers being 32 or 64 bits on different machines), there's no technical problem with casting between them; it's just dangerous because you don't want to accidentally dereference something that isn't really a valid memory address. – Wyzard Jul 23 '14 at 13:39
  • Yes makes sense, it's just odd. if this is an offset then why not passing an int. And if it needs to be a pointer to a variable, then why not doing so. Anyway, I simply can't get it to work and would still like to know if the use of GL_ARRAY_BUFFER is okay to store both vertex and index data. Thank you. – user18490 Jul 23 '14 at 13:56
  • @Wyzard: Unfortunately it's not that simple. Technically this way of casting some integer to a pointer may yield undefined behavior. The clean solution is to cast the function signature to one that accepts a uintptr_t as data argument. See http://stackoverflow.com/a/8284829/524368 – datenwolf Jul 23 '14 at 14:06
  • 2
    The signature of `glDrawElements` was defined back before there were buffer objects; originally you'd be passing an *actual* pointer to data in a [client-side vertex array](https://www.opengl.org/wiki/Client-Side_Vertex_Arrays). When device-side buffers were introduced, this function was extended to support them as well, by shoehorning a buffer offset into the address argument. – Wyzard Jul 23 '14 at 14:10
  • @datenwolf @Wyzard. Thank you for the complementary info. Anyone about glDrawElements and using GL_ARRAY_BUFFER to store vertex and index data? – user18490 Jul 23 '14 at 14:12

Thanks for the answers. After doing some research on the web, though, I think that:

  • first, you should be using glGenVertexArrays. It seems that this is THE standard now for OpenGL 4.x, so rather than calling glVertexAttribPointer just before drawing the geometry, it seems to be best practice to create a VAO when the data is pushed to the GPU buffers.

  • I was actually able to combine the vertex data and the indices within the SAME buffer (a GL_ARRAY_BUFFER) and then draw the primitive using glDrawElements (see below). The standard way, though, is to push the vertex data to a GL_ARRAY_BUFFER and the indices to a GL_ELEMENT_ARRAY_BUFFER separately. So if that's the standard way of doing it, it's probably better not to try to be too clever and just use these functions.

Example:

glGenBuffers(1, &vbo);
// push the data using GL_ARRAY_BUFFER
glGenBuffers(1, &vio);
// push the indices using GL_ELEMENT_ARRAY_BUFFER
...
glGenVertexArrays(1, &vao);
// do calls to glVertexAttribPointer
...

Please correct me if I am wrong, but that seems the correct (and only) way to go.

EDIT:

However, it is actually possible to "pack" the vertex data and the indices together into a GL_ARRAY_BUFFER, as long as a call to glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo) is made prior to calling glDrawElements.

Working code (compared with code in original post):

float vertices[] = {
    0,  1, 0, // Vertex 1 (X, Y, Z)
    2, -1, 0, // Vertex 2 (X, Y, Z)
   -1, -1, 0, // Vertex 3 (X, Y, Z)
    3,  1, 0, // Vertex 4 (X, Y, Z)
};

U8 *ptr = (U8*)malloc(4 * 3 * sizeof(float) + 6 * sizeof(unsigned int));
memcpy(ptr, vertices, 4 * 3 * sizeof(float));
unsigned int indices[6] = { 0, 1, 2, 0, 3, 1 };
memcpy(ptr + 4 * 3 * sizeof(float), indices, 6 * sizeof(unsigned int));

glGenBuffers(1, &vbo);

glBindBuffer(GL_ARRAY_BUFFER, vbo);

glBufferData(GL_ARRAY_BUFFER, 4 * 3 * sizeof(float) + 6 * sizeof(unsigned int), ptr, GL_STATIC_DRAW);

glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(0);

free(ptr);

Then when it comes to draw:

glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo); // << THIS IS ACTUALLY NOT NECESSARY

// VVVV THIS WILL MAKE IT WORK VVVV

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo);

// ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

// see stackoverflow.com/questions/8283714/what-is-the-result-of-null-int/
typedef void (*TFPTR_DrawElements)(GLenum, GLsizei, GLenum, uintptr_t);
TFPTR_DrawElements myGlDrawElements = (TFPTR_DrawElements)glDrawElements;

myGlDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, uintptr_t(4 * 3 * sizeof(float)));
user18490
  • Using VAOs is required when you use the OpenGL Core Profile. Having the vertex data and indices in the same buffer is possible, and works fine, but I don't think it's very commonly done. So using separate buffers for vertex data and indices is mostly standard, and there's absolutely nothing wrong with it. – Reto Koradi Jul 23 '14 at 18:09
  • Thank you. I haven't managed to get to work though. Yes I agree why bother if there's another method (more standard maybe) but I'd have liked to get it working for the sake a proving it actually works. I think it can be interesting to have all the data in the same memory location. If you have an example of having it working I'd love to see this. It keeps crashing for me. I have added the code that crashes in the original question. – user18490 Jul 23 '14 at 18:22
  • Actually found the problem and added the solution. Thank you everyone for your contribution. – user18490 Jul 23 '14 at 18:45
  • 1
    The better approach is that you call `glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ...)` while you're setting up your VAO, where your `glVertexAttribPointer()` and other similar calls are. The element array buffer binding is part of the VAO state. If you do this, you'll only need the `glBindVertexArray()` in the draw code. – Reto Koradi Jul 23 '14 at 19:05