
This question asks whether one can rely on the compiler not to mess with a struct's member order and padding.

According to the answer to that question,

OpenGL defines, very clearly, what the byte layout of a std140 interface block is.

C++11 defines a concept called "standard layout types".

The only things C++ tells you about standard layout types with regard to layout are that empty base classes are ignored (so long as the class remains standard layout) and that the first non-static data member (NSDM) will be at the very beginning of the class. That is, there will never be padding at the front.

The other thing the standard says is that NSDMs of the same access class will be allocated in order, with later ones having larger offsets than earlier ones.

But that's it, as far as the C++ standard is concerned. [class.mem]/13 states that implementations can add padding between members for various reasons.

That padding, which may or may not be present, can really mess things up, and the worst part is that it depends on the compiler.
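For illustration, a minimal sketch (the struct here is hypothetical, not from the question) of how offsetof exposes whatever padding the compiler chose to insert:

#include <cstddef>
#include <cstdio>

// A standard-layout struct; the compiler may insert padding after 'flag'
// so that 'value' meets its alignment requirement.
struct Example
{
    char  flag;
    float value;
};

int main()
{
    // Typically prints 4 (i.e. three bytes of padding), but the C++
    // standard does not pin the number down.
    std::printf("offsetof(Example, value) = %zu\n", offsetof(Example, value));
}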


To avoid bugs and nightmares, isn't it better to use a compiler-agnostic approach?

For example:

class BufferData
{
private:
    GLfloat data[12]; // three vec4s packed back to back, std140-style

public:
    GLfloat* getCameraPosition()
    {
        return &data[0]; // vec4 at byte offset 0
    }
    GLfloat* getLightPosition()
    {
        return &data[4]; // vec4 at byte offset 16
    }
    GLfloat* getLightDiffuse()
    {
        return &data[8]; // vec4 at byte offset 32
    }
    GLfloat* getData()
    {
        return data;
    }
};
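For context, uploading the packed array might look like the sketch below (the ubo handle and the glBufferSubData call are my assumptions, not part of the question):

// Hypothetical usage: 'ubo' is a uniform buffer object created elsewhere.
BufferData buffer;
buffer.getCameraPosition()[0] = 1.0f; // write straight into the flat array
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferSubData(GL_UNIFORM_BUFFER, 0, 12 * sizeof(GLfloat), buffer.getData());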

As opposed to the naive:

struct BufferData
{
    GLfloat camera_position[4];
    GLfloat light_position[4];
    GLfloat light_diffuse[4];
};

Or is the naive approach good enough?

(Let's suppose that the class/struct has more than just that, and might change)
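A possible middle ground is to keep the naive struct but pin its layout down with compile-time checks; a minimal sketch, using the offsets std140 prescribes for three vec4s:

#include <cstddef>

// If the compiler inserts any padding, these fail at compile time
// instead of producing garbage on screen.
static_assert(offsetof(BufferData, camera_position) == 0,  "std140: camera_position at byte 0");
static_assert(offsetof(BufferData, light_position)  == 16, "std140: light_position at byte 16");
static_assert(offsetof(BufferData, light_diffuse)   == 32, "std140: light_diffuse at byte 32");
static_assert(sizeof(BufferData) == 48, "std140: no trailing padding expected");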


1 Answer

"compiler-agnostic"? There's no such animal. And your attempt to write one proves it. Consider your struct member definition:

GLfloat data[12];

That requires the existence of a GLfloat type. But the thing is, C++ does not define that type. OpenGL does.

OpenGL defines that type very clearly: it is an IEEE-754 floating-point type, using the binary32 format.

The thing is, C++ does not require that float conform to that. Indeed, C++ doesn't require that any of its types conform to that. If a compiler wants to have float use something other than IEEE-754, that's just fine.

Now, you may say that the OpenGL header could define GLfloat to be a class type, 32-bits in size, that will convert from the compiler's float type to IEEE-754. Sure, that could happen... unless there's no way to have a 32-bit value.

There are systems out there with 9-bit bytes. Or 18-bit bytes. There are C++ compilers for these systems. Such systems cannot declare a type that is only 32-bits in size.

But being able to pass 32-bit values (not to mention 16-bit and 8-bit) is a hard requirement of OpenGL. You would not be able to pass any data in a buffer object without that. And yet, C++ doesn't require it.
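You can at least turn those assumptions into compile-time checks; a minimal sketch, assuming GLfloat comes from the usual GL headers:

#include <climits>
#include <limits>

static_assert(CHAR_BIT == 8, "OpenGL assumes 8-bit bytes");
static_assert(sizeof(GLfloat) * CHAR_BIT == 32, "GLfloat must be 32 bits");
static_assert(std::numeric_limits<GLfloat>::is_iec559,
              "GLfloat must be IEEE-754 binary32");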

Speaking of vertex data, one of the most basic functions in semi-modern OpenGL is glVertexAttribPointer. And it relies on you casting a byte offset into a void*, which it will then cast back.

C++ doesn't guarantee that this works. Nowhere in the C++ standard does it require that if you cast an integer into a pointer, then cast that pointer back into an integer, you'll get the same integer back (it does say that ptr->int->ptr works, but that doesn't imply the reverse).

And yet OpenGL requires it. Unless you use separate attribute buffers (and I strongly suggest you do if the feature is available), your code, and the OpenGL code you call, relies on this implementation-defined behavior.
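Concretely, the idiom looks like this sketch (the vbo handle and the interleaved position+normal layout are hypothetical):

// A byte offset smuggled through a void*; the driver casts it back.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat),
                      reinterpret_cast<void*>(0));                   // position at offset 0
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat),
                      reinterpret_cast<void*>(3 * sizeof(GLfloat))); // normal at offset 12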

OpenGL defines GLint as being a signed, two's complement, 32-bit integer. But C++ does not require any integer type to be two's complement.

But OpenGL does.
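Again, you can assert the assumption rather than trust it; a minimal sketch (with <climits> included as before):

static_assert(sizeof(GLint) * CHAR_BIT == 32, "GLint must be 32 bits");
// ~0 equals -1 only under two's complement representation.
static_assert(GLint(-1) == ~GLint(0), "GLint must be two's complement");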

OpenGL flat out cannot operate on a system that has unusual sizes for types. It cannot operate on a system with 9-bit bytes. It cannot operate on a system that uses one's complement for signed integer math. I could keep going with this, but I think my point is clear.

By choosing to use OpenGL at all (Vulkan too, if you're wondering), you are already relying on implementation-defined behavior. So why bother making your life more difficult to avoid this specific bit of implementation-defined behavior, when you're already relying on tons of other implementation-defined behavior?

The horse is out of the barn; closing the doors now ain't helping.

  • Alright. What about a compiler-agnostic solution as long as the sizes of the types aren't unusual (are powers of 2)? – Ivan Rubinson Jul 20 '16 at 05:19
  • @IvanRubinson: So I presume you're going to stop using `glVertexAttribPointer` and any similar function that pretends a pointer is a byte offset? My point is that you're already relying on plenty of other implementation-defined behavior. Why is it so important to be "compiler-agnostic" on struct layout? The layout of types is generally the *easiest* thing to check (since it's generally defined by platform ABIs). – Nicol Bolas Jul 20 '16 at 05:42
  • You're raising valid points. Isn't "pointer is byte offset" a normal thing to rely on when programming in C? – Ivan Rubinson Jul 20 '16 at 06:17
  • @IvanRubinson: "*Isn't "pointer is byte offset" a normal thing to rely on when programming in C?*" I'm no expert on C programming, but no, it isn't. I don't think I've ever seen a C API other than OpenGL which genuinely requires that you convert an integer into a pointer just to pass it to someone who will convert it back into an integer. Now, I do know of a few C APIs that rely on converting pointers to integers and back, but that's actually *guaranteed* by the standard to work (so long as the integer type is big enough). – Nicol Bolas Jul 20 '16 at 06:33