Assuming that 0xFFF is a 12 bit number expressed in two's complement representation (which is not necessarily the case), it is equivalent to -1. Assuming further that our CPU also uses two's complement (extremely likely), then:
- Using small integer types such as (unsigned) char or short inside bitwise operations is dangerous, because of implicit type promotion. Assuming a 32 bit system with a 16 bit short, if such a variable (signed or unsigned) is used as the left operand of a shift, it will always get promoted to (signed) int.
- Under the above assumptions, U12<<4 gives the result 0xFFF0 of type int, which is signed. You then convert it to unsigned short upon assignment.
- The conversion *(short*)&XT is smelly but allowed by the pointer aliasing rules in C. The contents of the memory are now re-interpreted as the CPU's signed format.
- the_signed_short >> 4 invokes implementation-defined behavior when the left operand is negative. It does not necessarily result in an arithmetic shift as you expect; it could just as well be a logical shift.
- %X and %d expect unsigned int and int respectively, so passing a short is wrong. Here you get saved by the mandatory default argument promotion of variadic functions; in this case, a promotion to int, again.
So overall there's a lot of code smell here.
A better and mostly well-defined way to do this on the mentioned 32 bit system is this:
int32_t u12_to_i32 (uint32_t u12)
{
    u12 &= 0xFFF; // optionally, mask out potential clutter in upper bytes

    if(u12 & (1u<<11)) // if signed, bit 11 set?
    {
        u12 |= 0xFFFFFFu << 12; // "sign extend"
    }
    return u12; // unsigned to signed conversion, impl.defined
}
All bit manipulations here are done on an unsigned type, which will not get silently promoted on a 32 bit system. This method also has the advantage of using pure hex bit masks and no "magic numbers".
Complete example with test cases:
#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

int32_t u12_to_i32 (uint32_t u12)
{
    u12 &= 0xFFF; // optionally, mask out potential clutter in upper bytes

    if(u12 & (1u<<11)) // if signed, bit 11 set?
    {
        u12 |= 0xFFFFFFu << 12; // "sign extend"
    }
    return u12; // unsigned to signed conversion, impl.defined
}

int main (void)
{
    uint32_t u12;
    int32_t  i32;

    u12=0;     i32 = u12_to_i32(u12);
    printf("%08"PRIX32 "-> %08"PRIX32 " = %"PRIi32 "\n", u12, (uint32_t)i32, i32);
    u12=0x7FF; i32 = u12_to_i32(u12);
    printf("%08"PRIX32 "-> %08"PRIX32 " = %"PRIi32 "\n", u12, (uint32_t)i32, i32);
    u12=0x800; i32 = u12_to_i32(u12);
    printf("%08"PRIX32 "-> %08"PRIX32 " = %"PRIi32 "\n", u12, (uint32_t)i32, i32);
    u12=0xFFF; i32 = u12_to_i32(u12);
    printf("%08"PRIX32 "-> %08"PRIX32 " = %"PRIi32 "\n", u12, (uint32_t)i32, i32);

    return 0;
}
Output (gcc x86_64 Linux):
00000000-> 00000000 = 0
000007FF-> 000007FF = 2047
00000800-> FFFFF800 = -2048
00000FFF-> FFFFFFFF = -1