I am trying to understand the difference between using GL_INT_2_10_10_10_REV vs. GLbyte for my normal data. Currently I'm loading my normals into a glm::vec4 and packing them like so:
#include <cstdint>
#include <limits>

// Pack four floats in [-1, 1] into one signed normalized byte each.
int32_t floatToSignedNormalizedByte(float x, float y, float z, float w)
{
    const int16_t maxValue = static_cast<int16_t>(std::numeric_limits<int8_t>::max()); // 127
    const int16_t negativeValueScale = maxValue + 1;                                   // 128
    // Convert to int8_t first, then reinterpret as uint8_t so the value is
    // masked to 8 bits. OR-ing the int8_t directly would sign-extend it to
    // 32 bits and a negative component would clobber the bytes already packed.
    auto toByte = [&](float v) -> uint32_t {
        return static_cast<uint8_t>(
            static_cast<int8_t>(v < 0 ? v * negativeValueScale : v * maxValue));
    };
    return static_cast<int32_t>(
        toByte(x) | (toByte(y) << 8) | (toByte(z) << 16) | (toByte(w) << 24));
}
With the packed normal, I would then call:
//Location, component count, type, normalized, stride, offset
glVertexAttribPointer(location, 4, GL_BYTE, GL_TRUE, format.getVertexSize(), format.getFieldOffset(field, dataBasePtr));
My question is: given the way I'm packing my normal, should I be using GL_INT_2_10_10_10_REV as my type instead of GL_BYTE? I understand that with GL_INT_2_10_10_10_REV each component gets 10 bits instead of 8, and that's fine since I really only need the xyz components. Which one is better, and why? If I use GL_INT_2_10_10_10_REV, I am guessing the component count is still 4?
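For reference, here is my understanding of how the packing function would change for GL_INT_2_10_10_10_REV (the function name and the symmetric ±511 scaling are my own choices, so treat this as a sketch): x, y, and z each get a signed 10-bit field starting from the lowest bits, and w gets the top signed 2-bit field.

```cpp
#include <cstdint>
#include <cmath>

// Sketch: pack a normal into the GL_INT_2_10_10_10_REV layout.
// x occupies bits 0-9, y bits 10-19, z bits 20-29, w bits 30-31
// ("REV" meaning the components are stored lowest-first).
uint32_t packSnorm2_10_10_10_rev(float x, float y, float z, float w)
{
    auto pack10 = [](float v) -> uint32_t {
        // Clamp to [-1, 1], scale to the signed 10-bit range [-511, 511],
        // then keep only the low 10 bits (two's complement).
        v = v < -1.0f ? -1.0f : (v > 1.0f ? 1.0f : v);
        int32_t i = static_cast<int32_t>(std::round(v * 511.0f));
        return static_cast<uint32_t>(i) & 0x3FFu;
    };
    auto pack2 = [](float v) -> uint32_t {
        // Same idea for the 2-bit w component: range [-1, 1].
        v = v < -1.0f ? -1.0f : (v > 1.0f ? 1.0f : v);
        int32_t i = static_cast<int32_t>(std::round(v));
        return static_cast<uint32_t>(i) & 0x3u;
    };
    return pack10(x) | (pack10(y) << 10) | (pack10(z) << 20) | (pack2(w) << 30);
}
```

With this layout, my understanding is the glVertexAttribPointer call would keep a component count of 4 and only swap the type, e.g. `glVertexAttribPointer(location, 4, GL_INT_2_10_10_10_REV, GL_TRUE, stride, offset)`.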