I came across this piece of C code:
typedef int gint;
// ...
gint a, b;
// ...
a = (b << 16) >> 16;
For ease of notation, let's assume that b = 0x11223344
at this point. As far as I can see, it does the following:
b << 16
will give 0x33440000
>> 16
will give 0x00003344
So, the 16 highest bits are discarded.
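To check my reading, I printed the intermediate values with a small test program (assuming a 32-bit int, which is what I have here):

#include <stdio.h>

typedef int gint;

int main(void)
{
    gint b = 0x11223344;
    gint shifted = b << 16;          /* expect 0x33440000 on a 32-bit int */
    gint result  = (b << 16) >> 16;  /* expect 0x00003344 */

    printf("b << 16         = 0x%08x\n", shifted);
    printf("(b << 16) >> 16 = 0x%08x\n", result);
    return 0;
}

That prints what I expected, so my understanding of the shifts seems right for this value.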
Why would anyone write (b << 16) >> 16
if b & 0x0000ffff
would work as well? Isn't the latter form more understandable? Is there any reason to use bit shifts in a case like this? Is there any edge case where the two would not give the same result?
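For what it's worth, the two forms do agree for this particular value on my machine (again assuming a 32-bit int), which is what prompted the question:

#include <stdio.h>

typedef int gint;

int main(void)
{
    gint b = 0x11223344;
    printf("shifts: 0x%08x\n", (b << 16) >> 16);  /* the form used in the code I found */
    printf("mask:   0x%08x\n", b & 0x0000ffff);   /* the form I would have written    */
    return 0;
}

Both lines print 0x00003344 here.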