36

I like to be as standard as possible, so why should I "constrain" my classes by defining their members as OpenGL types when I could use primitive types instead? Is there any advantage?

JSeven

3 Answers

55

The type "unsigned int" has a different size depending on the platform you're building on. I expect this to normally be 32 bits, however it could be 16 or 64 (or something else -- depending on the platform).

Library-specific types are typically typedefs chosen according to platform-specific rules. This allows a generic application to use the right type without having to be aware of the platform it will be built for; the platform-specific knowledge is confined to a single common header file.
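As a rough sketch of the idea (hypothetical names, not an actual OpenGL header), such a header might select a base type per platform like this:

/* Hypothetical platform-selecting header; real GL headers key off
   their own compiler/ABI macros. */
#if defined(__LP64__) || defined(_WIN32)
typedef unsigned int  mylib_uint32;   /* int is 32 bits on these ABIs */
#else
typedef unsigned long mylib_uint32;   /* long is at least 32 bits everywhere */
#endif

Application code then uses mylib_uint32 everywhere and never has to ask which platform it is being compiled for.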

mah
3

I don't think it matters in this case, because the spec gives minimum sizes, not exact sizes. Have a look at gl.h (around line 149): they're just typedefs of basic C types. They're mostly a convenience -- for example, there is a boolean type, so if you're using C89 and don't otherwise have booleans, there's one set up for you to use with GL. GLuint is just a shorter way of typing unsigned int:

typedef unsigned int    GLenum;
typedef unsigned char   GLboolean;
typedef unsigned int    GLbitfield;
typedef void            GLvoid;
typedef signed char     GLbyte;     /* 1-byte signed */
typedef short           GLshort;    /* 2-byte signed */
typedef int             GLint;      /* 4-byte signed */
typedef unsigned char   GLubyte;    /* 1-byte unsigned */
typedef unsigned short  GLushort;   /* 2-byte unsigned */
typedef unsigned int    GLuint;     /* 4-byte unsigned */
typedef int             GLsizei;    /* 4-byte signed */
typedef float           GLfloat;    /* single precision float */
typedef float           GLclampf;   /* single precision float in [0,1] */
typedef double          GLdouble;   /* double precision float */
typedef double          GLclampd;   /* double precision float in [0,1] */
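As an illustrative sketch (not part of the original answer), using the GL typedefs when calling into the API keeps your variables matched to the function signatures; the snippet below assumes a current GL context has already been created elsewhere:

#include <GL/gl.h>

static void query_and_create_texture(void)
{
    GLint  max_size = 0;   /* glGetIntegerv takes a GLint*  */
    GLuint texture  = 0;   /* glGenTextures takes a GLuint* */

    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &max_size);
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
}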
  • 2
    That might be true on your particular OpenGL implementation, but once you switch to a platform where `unsigned int` is not 4 bytes your code might stop working. – ComicSansMS Nov 11 '13 at 11:08
  • um...i don't see any size preservation there at all. –  Nov 11 '13 at 12:27
  • 3
    Check the [spec (PDF)](http://www.opengl.org/registry/doc/glspec40.core.20100311.pdf) (Table 2.2): A GLuint for example is required to be at least 32 bits in size, while a C++ `unsigned int` only needs at least 16 bits according to the ISO C++ standard. The spec points this out specifically: _GL types are not C types. Thus, for example, GL type int is referred to as GLint outside this document, and is not necessarily equivalent to the C type int_. – ComicSansMS Nov 11 '13 at 12:56
  • 1
    that's interesting, but there is no protection provided by the typedef at all - it just means that it might not work properly if you use a very very very old computer. you missed the part right below your quote where it says "Correct interpretation of integer values outside the minimum range is not required, however" –  Nov 11 '13 at 13:51
  • 1
    No, the typedef is something that is provided by your platform. On a different platform, you would use a different `gl.h` with a typedef that meets the requirements. OpenGL implementations are not portable between platforms. For example, you cannot use the `gl.h` from the Windows SDK to compile on Linux. – ComicSansMS Nov 11 '13 at 14:03
  • you'd need to be using a 16-bit architecture to not meet the table of minimum requirements. –  Nov 11 '13 at 14:17
  • You suggest cross-platform compatibility should only apply to 32 bit and 64 bit architectures? That kind of misses the point... – ComicSansMS Nov 11 '13 at 14:21
  • i am not aware of any releases of opengl implementations for 16-bit devices. –  Nov 11 '13 at 14:28
  • @ComicSansMS Then the corresponding type for `GLuint` is `unsigned long` – user877329 Jul 30 '14 at 08:02
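As an illustrative aside on the minimum-size debate above: you can verify at compile time that the GL typedefs on your platform meet the spec's 32-bit minimum for GLuint (a sketch only; the typedef name is made up):

#include <limits.h>
#include <GL/gl.h>

/* Fails to compile (negative array size) if GLuint is narrower than the
   32 bits the OpenGL spec requires; this trick works even in C89. */
typedef char gluint_is_at_least_32_bits[(sizeof(GLuint) * CHAR_BIT >= 32) ? 1 : -1];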
1

Better cross-platform compatibility.

karlphillip