I've been experimenting with changing the values of some of the fields when packing a byte, based on my last question: Field packing to form a single byte.

However, I'm getting unexpected results for some values. The first code sample below gives the expected output of 0x91. But if I change the colorResolution and sizeOfGlobalColorTable variables to 010, I get an unexpected output of 0x80. Based on this hex table: http://www.best-microcontroller-projects.com/hex-code-table.html, the bit pattern I'm aiming for, 10100010, should come out as 0xA2, so that's what I would expect the second code sample to log. What am I missing or not understanding?
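For reference, here is how I arrive at 0xA2 for the second sample, laying each field out in the bit position its shift puts it in (the field names match the variables in the code below):

globalColorTableFlag     -> bit 7    : 1
colorResolution          -> bits 6-4 : 010
screenDescriptorSortFlag -> bit 3    : 0
sizeOfGlobalColorTable   -> bits 2-0 : 010

packed byte: 1 010 0 010 = 10100010 = 0xA2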
This code correctly logs: 0x91
uint8_t screenDescriptorPackedFieldByte = 0;
uint8_t globalColorTableFlag = 1;
uint8_t colorResolution = 001;
uint8_t screenDescriptorSortFlag = 0;
uint8_t sizeOfGlobalColorTable = 001;
screenDescriptorPackedFieldByte |= ((globalColorTableFlag & 0x1) << 7);
screenDescriptorPackedFieldByte |= ((colorResolution & 0x7) << 4);
screenDescriptorPackedFieldByte |= ((screenDescriptorSortFlag & 0x1) << 3);
screenDescriptorPackedFieldByte |= ((sizeOfGlobalColorTable & 0x7) << 0);
NSLog(@"0x%02X",screenDescriptorPackedFieldByte);
This code incorrectly logs: 0x80
uint8_t screenDescriptorPackedFieldByte = 0;
uint8_t globalColorTableFlag = 1;
uint8_t colorResolution = 010;
uint8_t screenDescriptorSortFlag = 0;
uint8_t sizeOfGlobalColorTable = 010;
screenDescriptorPackedFieldByte |= ((globalColorTableFlag & 0x1) << 7);
screenDescriptorPackedFieldByte |= ((colorResolution & 0x7) << 4);
screenDescriptorPackedFieldByte |= ((screenDescriptorSortFlag & 0x1) << 3);
screenDescriptorPackedFieldByte |= ((sizeOfGlobalColorTable & 0x7) << 0);
NSLog(@"0x%02X",screenDescriptorPackedFieldByte);