
My best guess is that a GLuint holds a pointer rather than the object itself, and hence it can "hold" any object, because it's actually just holding a pointer to a location in memory.

But if this is true, why do I not need to dereference anything when using these variables?

genpfault
Evansnye_
    These values are *not* pointers, and thus should not be dereferenced. How they are mapped to "objects" in OpenGL is a black box which we don't really need to know about. – Some programmer dude Oct 12 '21 at 15:50
    OpenGL is holding "handles" (indexes to internal data structures) in `GLuint`s - they could reference something in the OpenGL driver or on the Video card etc. They should not be "de-referenced" or otherwise manipulated. The OpenGL driver knows what these "handles" represent and they should only be passed to the correct API calls. If you pass them to incorrect API calls then you break the API contract. – Richard Critten Oct 12 '21 at 15:50
  • Short answer: it is not true. – molbdnilo Oct 12 '21 at 15:56
    https://en.wikipedia.org/wiki/Handle_(computing) – genpfault Oct 12 '21 at 16:09

1 Answer


OpenGL object names are handles referencing an OpenGL object. They are not "pointers"; each is just a unique identifier which specifies a particular object. The OpenGL implementation, for each object type, has a map between object names and the actual internal object storage.

This dichotomy exists for historical reasons.

The very first OpenGL object type was the display list. You create a display list with the glNewList function. This function doesn't give you a name for the object; you tell it the integer name that the implementation will use.

This is the foundational reason for the dichotomy: the user decides what the names are, and the implementation maps from the user-specified name to the implementation-defined data. The only limitation is that you can't use the same name twice.

The display list paradigm was modified slightly for the next OpenGL object type: textures. In the new paradigm, there is a function that allows the implementation to create names for you: glGenTextures. But this function was optional. You could call glBindTexture on any integer you want, and the implementation will, in that moment, create a texture object that maps to that integer name.

As new object types were created, OpenGL kept the texture paradigm for them. They had glGen* functions, but they were optional so that the user could specify whatever names they wanted.

Shader objects were a bit of a departure, as their Create functions don't allow you to pick names. But they still used integers because... API consistency matters even when being inconsistent (note that the extension version of GLSL shader objects used pointers, but the core version decided not to).

Of course, core OpenGL did away with user-provided names entirely. But it couldn't get rid of integer object names as a concept without basically creating a new API. While core OpenGL is a compatibility break, it was designed such that, if you coded your pre-core OpenGL code "correctly", it would still work in core OpenGL. That is, core OpenGL code should also be valid compatibility OpenGL code.

And the path of least resistance for that was to not create a new API, even if it makes the API really silly.

Nicol Bolas