Many sources recommend using 16-bit vertex/texture coordinates, but all the example code I've seen relies on 32-bit floats.
I've read the extension for 16-bit vertex coordinates, but it doesn't provide any examples of how it can be used.
16-bit vertex/texture coordinates have been in OpenGL (and OpenGL ES) since the beginning, no extension needed. You can provide the coordinates with the GL_SHORT type, which is a signed 16-bit integer. If you pass normalized=GL_TRUE to glVertexAttribPointer, the values are normalized to the [-1, 1] range. You can then scale them in your vertex shader accordingly (whether they are normalized or not).
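For illustration, here's a minimal sketch of what the GPU does with a normalized GL_SHORT attribute before your shader sees it. The divide-by-32767-and-clamp rule shown is the convention from newer GL versions (4.2+); older versions used (2c + 1)/65535, so treat this as an assumption about your target:

```c
#include <stdint.h>

/* Sketch of how a normalized GL_SHORT attribute maps to a float in [-1, 1].
 * Assumes the modern GL rule: divide by 32767, then clamp the low end so
 * -32768 still lands on exactly -1.0. Not actual driver code. */
static float normalized_short_to_float(int16_t v) {
    float f = (float)v / 32767.0f;
    return f < -1.0f ? -1.0f : f;
}
```

On the API side you would declare such an attribute with something like glVertexAttribPointer(loc, 3, GL_SHORT, GL_TRUE, stride, offset); and then apply your own scale/offset in the vertex shader to recover the original coordinate range.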
See this question for more information.
The GLM library supports half-float types. The prefix used is 'h', so where glm::vec3 is a 3-element vector of floating-point values, glm::hvec3 is a 3-element vector of half-floats.
And you also need something like glVertexAttribPointer(..., ..., GL_HALF_FLOAT, GL_FALSE, ..., ...);
See the thread "16-bit floats and GL_HALF_FLOAT" and the OpenGL wiki page "Small float formats", which says:
Half floats
32-bit floats are often called "single-precision" floats, and 64-bit floats are often called "double-precision" floats. 16-bit floats therefore are called "half-precision" floats, or just "half floats".
OpenGL supports the use of half floats in Image Formats, but it also allows them to be used as Vertex Attributes by setting the format component type to GL_HALF_FLOAT.
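To make the binary16 layout above concrete, here is a hedged sketch of a float-to-half conversion of the kind you'd use to fill a GL_HALF_FLOAT vertex buffer by hand. It covers the normal range only (infinities, NaNs, and subnormals are flushed or clamped, and the mantissa is truncated rather than rounded), so it's a teaching sketch, not a production converter:

```c
#include <stdint.h>
#include <string.h>

/* Convert a 32-bit float to IEEE 754 binary16 bits: 1 sign bit,
 * 5 exponent bits (bias 15), 10 mantissa bits. Sketch only: small
 * values flush to signed zero, large values clamp to infinity, and
 * the mantissa is truncated instead of rounded to nearest. */
static uint16_t float_to_half(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);                  /* reinterpret as IEEE 754 binary32 */
    uint16_t sign = (uint16_t)((bits >> 16) & 0x8000u);
    int32_t  exp  = (int32_t)((bits >> 23) & 0xFFu) - 127 + 15; /* rebias 127 -> 15 */
    uint16_t mant = (uint16_t)((bits >> 13) & 0x3FFu);          /* keep top 10 bits */
    if (exp <= 0)  return sign;                      /* underflow: flush to zero */
    if (exp >= 31) return sign | 0x7C00u;            /* overflow: clamp to infinity */
    return sign | (uint16_t)(exp << 10) | mant;
}
```

With coordinates packed this way into a buffer of uint16_t, the matching attribute declaration is the glVertexAttribPointer(..., GL_HALF_FLOAT, GL_FALSE, ...) call shown earlier (note GL_FALSE: half floats are already floating-point, so there is nothing to normalize).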