I was wondering what happens when an 8-bit value is compared against a 16-bit value.
I'll try to explain the problem with a code example:
bool result;
unsigned char a_8bit = 0xcd;
unsigned short b_16bit = 0xabcd;
result = a_8bit < b_16bit;
Possible results could be:
- a_8bit is implicitly converted to unsigned short and compared to b_16bit as a 16-bit value. The result is true.
- b_16bit is implicitly converted to unsigned char and compared to a_8bit as an 8-bit value. The result is false.
Does anybody have a clue what the compiler will do with this piece of code? Sure, I can try it out, but could different compilers interpret this code differently?
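For reference, this is the kind of minimal test program I would use to try it out myself (just a sketch, assuming a C++ compiler); it would only show me what one particular compiler does, which is exactly why I'm asking:

#include <iostream>

int main() {
    unsigned char a_8bit = 0xcd;
    unsigned short b_16bit = 0xabcd;
    // The comparison in question: is a_8bit widened, or b_16bit narrowed?
    bool result = a_8bit < b_16bit;
    std::cout << std::boolalpha << result << std::endl;  // prints "true" or "false"
    return 0;
}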