@JDługosz's answer is great if you're on a platform that supports that struct syntax. However, you mention OpenGL.
Here's a version that will work in a shader.
Basically, test the sign in a platform-agnostic way and set a bit for it (1 for negative, 0 for positive), OR in the lower four bits of the value's magnitude, then shift the result over by five bits to make room for the next value.
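As a rough sketch of that sign-and-magnitude scheme for a single 5-bit field (GLSL-style, assuming a version with integer bitwise ops such as #version 330 or GLSL ES 3.00; illustration only, not the simplified version used below):

int packSigned5 (int v)
{
    int sign = (v < 0) ? 1 : 0;   // 1 for negative, 0 for positive
    int mag  = abs(v) & 0xF;      // lower four bits of the magnitude
    return (sign << 4) | mag;     // one 5-bit field
}

int unpackSigned5 (int field)
{
    int mag = field & 0xF;
    return ((field & 0x10) != 0) ? -mag : mag;
}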
Since you're dealing with values from -15 to +15, you can simplify things a bit. Rather than checking the sign, just add a constant to the value to force it to be positive. (Though I'd recommend adding an assert on the packing side to make sure the input values are actually within that -15 to +15 range.) When unpacking, subtract that constant.
TL;DR: Convert each input into a non-negative integer, keep its lower 5 bits, and shift/OR the fields together.
int pack3 (int a, int b, int c)
{
    // Shift each value into the 0..31 range, then keep only the low five bits.
    a = (a + 16) & 0x1F;
    b = (b + 16) & 0x1F;
    c = (c + 16) & 0x1F;
    // Pack the three 5-bit fields into a single int.
    return (a << 10) | (b << 5) | c;
}

void unpack3 (int p, int &a, int &b, int &c)
{
    // The 3 mask & subtraction ops could be done in one step on p, but
    // I left them separate here for something resembling clarity.
    c = (p & 0x1F) - 16;
    b = ((p >> 5) & 0x1F) - 16;
    a = ((p >> 10) & 0x1F) - 16;
}
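For example, a quick round trip (values chosen arbitrarily; the call looks the same whether you use the C++ reference version above or the shader version discussed below):

int p = pack3(-7, 0, 15);   // (9 << 10) | (16 << 5) | 31 == 9759
int a, b, c;
unpack3(p, a, b, c);        // a == -7, b == 0, c == 15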
For a shader implementation, unpack3()
will need its &
reference parameters replaced with out or inout
(or the equivalent for your shader model & language).
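For instance, a GLSL sketch of unpack3() might look like this (pack3() is unchanged; this assumes a version with integer bitwise ops, e.g. #version 330 or GLSL ES 3.00):

void unpack3 (int p, out int a, out int b, out int c)
{
    c = (p & 0x1F) - 16;
    b = ((p >> 5) & 0x1F) - 16;
    a = ((p >> 10) & 0x1F) - 16;
}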
See it working with a test driver here.