
I am trying to understand the difference between using GL_INT_2_10_10_10_REV and GLbyte for my normal data. Currently I'm loading my normals into a glm::vec4 and packing them like so:

#include <cstdint>
#include <limits>

int32_t floatToSignedNormalizedByte(float x, float y, float z, float w)
{
    uint32_t result = 0;
    const int16_t maxValue = static_cast<int16_t>(std::numeric_limits<int8_t>::max()); // 127
    const int16_t negativeValueScale = maxValue + 1;                                   // 128

    // Convert each component to a signed byte, then mask it to 8 bits so the sign
    // extension that happens during integer promotion can't clobber the bytes
    // already packed into the higher bits.
    result |= static_cast<uint32_t>(static_cast<int8_t>(x < 0 ? x * negativeValueScale : x * maxValue)) & 0xFFu;
    result |= (static_cast<uint32_t>(static_cast<int8_t>(y < 0 ? y * negativeValueScale : y * maxValue)) & 0xFFu) << 8;
    result |= (static_cast<uint32_t>(static_cast<int8_t>(z < 0 ? z * negativeValueScale : z * maxValue)) & 0xFFu) << 16;
    result |= (static_cast<uint32_t>(static_cast<int8_t>(w < 0 ? w * negativeValueScale : w * maxValue)) & 0xFFu) << 24;

    return static_cast<int32_t>(result);
}

With the packed normal, I would then call:

//Location, component count, type, normalized, stride, offset
glVertexAttribPointer(location, 4, GL_BYTE, true, format.getVertexSize(),  format.getFieldOffset(field, dataBasePtr));

My question is, given the way I'm packing my normal, should I be using GL_INT_2_10_10_10_REV as my type (replacing GL_BYTE)? I understand that using GL_INT_2_10_10_10_REV means each component gets 10 bits instead of 8, and that's fine since I really only need the xyz components. Which one is better and why? If I use GL_INT_2_10_10_10_REV, I am guessing the component count is still 4?

ChaoSXDemon
  • @NicolBolas, shouldn't normals be signed? Reading the specification, for signed normalized integers I can choose either the range [-128, 127] or [-127, 127] for 8 bits. It is said that the first one is more suitable for vertex data while the second is more for textures. I know the second one can express 0 exactly while the first cannot. I thought that wasn't a huge issue. What did I do wrong? Please help me :) – ChaoSXDemon Nov 27 '17 at 19:03
  • Okay I'm just talking to meself :( – ChaoSXDemon Nov 27 '17 at 19:03
  • I deleted the old comment because it was wrong. However, I found another potential problem: [that your code is right for OpenGL 4.1 and below, but wrong for 4.2 and above](https://www.khronos.org/opengl/wiki/Normalized_Integer#Signed). But that aside, I'm not sure exactly what your question is asking. Are you asking how to use 2_10_10_10 as a format, or whether you should? – Nicol Bolas Nov 27 '17 at 19:07
  • @NicolBolas, I'm asking how to use the 2_10_10_10 format, whether I should use it for normals, and how to pack it correctly. I find it difficult to know the order of packing ... without the `REV` flag. For the way I'm packing it, what should the order be? `REV` clearly indicates xyzw should be packed as wzyx from high bit to low bit (left to right). – ChaoSXDemon Nov 27 '17 at 19:14

1 Answer

Why

As you said, using 2_10_10_10 gives you 10 bits for each meaningful component (x, y and z) while leaving 2 bits for the w component, which is useless for normal vectors. With 10 bits you have 2^10 possible discrete values for x, y and z instead of the 2^8 values provided by 8 bits. This gives you more accurate normals and smoother gradients. You could also use floating-point values, but these would require more memory and would likely be slower. This answers the "why should I do this" part.

How

As for the "how", your current packing function turns each float into an 8-bit integer. You would need to change it to convert the x, y and z components into 10-bit integers and pack them into the low 30 bits, with the w component in the top two bits (01 for w = 1.0). The REV in the name means the components are stored in reverse order within the 32-bit word, i.e. w ends up in the highest bits and x in the lowest. A sketch of such a function is shown below.
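
For illustration, here is a minimal sketch of such a packing function. The helper name is mine, and it follows the same scaling convention as your existing byte-packing code (scale negatives by 2^(b-1) and positives by 2^(b-1) - 1) rather than the OpenGL 4.2+ conversion equation, so treat it as a starting point rather than a definitive implementation:

#include <cstdint>
#include <glm/glm.hpp>

// Packs a normal into the signed INT_2_10_10_10_REV layout:
// x in bits 0-9, y in bits 10-19, z in bits 20-29, w in bits 30-31.
int32_t packSnorm2_10_10_10_rev(const glm::vec4& n)
{
    auto toSignedBits = [](float v, int bits) -> uint32_t {
        const int32_t maxValue = (1 << (bits - 1)) - 1;             // 511 for 10 bits, 1 for 2 bits
        const int32_t scaled = static_cast<int32_t>(v < 0.0f ? v * (maxValue + 1) : v * maxValue);
        return static_cast<uint32_t>(scaled) & ((1u << bits) - 1u); // keep only the low 'bits' bits
    };

    const uint32_t packed = toSignedBits(n.x, 10)
                          | (toSignedBits(n.y, 10) << 10)
                          | (toSignedBits(n.z, 10) << 20)
                          | (toSignedBits(n.w, 2)  << 30);
    return static_cast<int32_t>(packed);
}

If your GLM version includes the GTC packing extension (<glm/gtc/packing.hpp>), glm::packSnorm3x10_1x2 should do essentially this job for you.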

More info about GL_INT_2_10_10_10_REV vertex format

Tables 10.3 and 10.4 of the OpenGL specification describe how these components are laid out in a 32-bit word:

INT_2_10_10_10_REV / UNSIGNED_INT_2_10_10_10_REV:

31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0
|  w |             z               |              y              |         x         |

Table 10.3: Packed component layout for non-BGRA formats. Bit numbers are indicated for each component.

INT_2_10_10_10_REV / UNSIGNED_INT_2_10_10_10_REV with size GL_BGRA:

31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0
|  w |             x               |              y              |         z         |

Table 10.4: Packed component layout for BGRA format. Bit numbers are indicated for each component.

As you can see, there are 4 components in this vertex format, so the component count you pass to glVertexAttribPointer is still 4.
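
For example, keeping the stride and offset expressions from your question as placeholders, the attribute setup would only change its type argument:

//Location, component count, type, normalized, stride, offset
glVertexAttribPointer(location, 4, GL_INT_2_10_10_10_REV, GL_TRUE, format.getVertexSize(), format.getFieldOffset(field, dataBasePtr));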

You might also find the specification of GL_ARB_vertex_type_2_10_10_10_rev an interesting read if you want more background:

Two new vertex attribute data formats: a signed 2.10.10.10 and an unsigned 2.10.10.10 vertex data format. These vertex data formats describe a 4 component stream which can be used to store normals or other attributes in a quantized form. Normals, tangents, binormals and other vertex attributes can often be specified at reduced precision without introducing noticeable artifacts, reducing the amount of memory and memory bandwidth they consume.

The "reduced precision" refers to using 10 bit integers instead of half float (16 bits) or float (32 bits) values for normals. Having fewer bits per vertex can provide better performance because the vertex assembly stage needs less memory bandwidth.

This question also has relevant info: Using GL_INT_2_10_10_10_REV in glVertexAttribPointer()

bernie
  • So the type should be GL_INT_2_10_10_10_REV and component count is still 4? What if I want to continue to use `2^8` packing? Should their order be the same with w being the highest bit and x being the lowest bit? – ChaoSXDemon Nov 27 '17 at 22:14