I've only been studying C++ (and programming in general, for that matter) for a week, so this question may reflect a lack of understanding of fundamental programming principles, but here goes nothing:
#include <iostream>

unsigned int bitFlags()
{
    unsigned char option1 = 0x01; // hex for 0000 0001
    unsigned char option2 = 0x02; // hex for 0000 0010
    unsigned char option3 = 0x04; // hex for 0000 0100
    unsigned char option4 = 0x08; // hex for 0000 1000
    unsigned char option5 = 0x10; // hex for 0001 0000
    unsigned char option6 = 0x20; // hex for 0010 0000
    unsigned char option7 = 0x40; // hex for 0100 0000
    unsigned char option8 = 0x80; // hex for 1000 0000

    unsigned char myflags; // byte-size value to hold some combination of the above 8 options
    myflags |= option1 | option2 | option3;

    if (myflags & option8)
        return 1;
    else
        return 0;
}

int main()
{
    std::cout << bitFlags() << "\n";
    return 0;
}
So, I set only 3 flags (option1, option2, and option3). Now, the flag query works as expected (returns 1 for options 1/2/3 and 0 for the rest) up until option7/option8: even though option7 and option8 are never set, the function returns 1 for them. Which brings me to the conclusion that unsigned char myflags must already have looked like this in binary before I set anything: 1100 0000.
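In case it matters, here is how I'm querying the other flags: I just swap which option the if statement tests, e.g. for option7 (this is the only line I change, nothing else):

    if (myflags & option7) // option7 is never set, yet this test is true
        return 1;
    else
        return 0;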
Well then:

1) What's happening here? Why are 2 bits already in use? How is an unsigned char using 2 bits in the first place? Shouldn't the "highest" bit be reserved only for signed variables?
2) Why do we use the bitwise assignment operator |= to set bit flags when it can give unexpected results like this? If we simply assign myflags = option1 | option2 | option3; it works as expected: the query for option7/option8 returns 0.
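For reference, here is the comparison I mean as a tiny standalone program (the optionN values are the same as above; the variable names withOrEquals and withPlainAssign are just ones I made up for this illustration):

#include <iostream>

int main()
{
    unsigned char option1 = 0x01, option2 = 0x02, option3 = 0x04, option8 = 0x80;

    unsigned char withOrEquals;                  // declared but not initialized, as in my function
    withOrEquals |= option1 | option2 | option3; // ORs the options into whatever was already there

    unsigned char withPlainAssign = option1 | option2 | option3; // plain assignment instead of |=

    std::cout << ((withOrEquals & option8) ? 1 : 0) << "\n";    // prints 1 for me (unexpected)
    std::cout << ((withPlainAssign & option8) ? 1 : 0) << "\n"; // prints 0, as I'd expect
    return 0;
}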
(There's a high probability that I have no idea what I'm talking about!)