I have a variable that represents the XOR of 2 numbers. For example: int xor = 7 ^ 2;
I am looking at some code that, according to its comments, finds the rightmost bit that is set in xor:
int rightBitSet = xor & ~(xor - 1);
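For context, here is a minimal sketch of what I am running (I'm assuming Java, since that's what the snippet looks like; the class name is just a placeholder of my own):

    public class XorRightBit {
        public static void main(String[] args) {
            int xor = 7 ^ 2;                    // 7 = 0111, 2 = 0010, so xor = 0101
            int rightBitSet = xor & ~(xor - 1); // the expression from the code I am reading
            System.out.println(Integer.toBinaryString(rightBitSet)); // prints 1
        }
    }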
I can't follow how exactly this piece of code works. In the case of 7 ^ 2 it does set rightBitSet to 0001 (in binary), i.e. 1, which is indeed the rightmost set bit.
But if the xor is 7 ^ 3, then rightBitSet is set to 0100, i.e. 4, which is the same value as xor itself (and, as far as I can tell, is not the rightmost set bit).
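To double-check the 7 ^ 3 case, I printed the intermediate values (same kind of throwaway snippet as above, class name again made up):

    public class XorRightBitTrace {
        public static void main(String[] args) {
            int xor = 7 ^ 3;                                              // 7 = 0111, 3 = 0011, so xor = 0100
            System.out.println(Integer.toBinaryString(xor));              // 100
            System.out.println(Integer.toBinaryString(xor - 1));          // 11
            System.out.println(Integer.toBinaryString(~(xor - 1)));       // 11111111111111111111111111111100
            System.out.println(Integer.toBinaryString(xor & ~(xor - 1))); // 100, i.e. 4, the same as xor
        }
    }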
The logic of the code is to find a number that represents a bit in which the two numbers that make up xor differ, and although the comments say it finds the rightmost set bit, it seems to me that the code finds a bit pattern with one differing bit in any position.
Am I correct? I am also not sure how the code works. It seems there is some relationship between a number X and the number X - 1 in their binary representations, as shown below. What is this relationship?
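To show what I mean, this is how I have been printing X next to X - 1 (again just my own scratch code):

    public class XMinusOneBits {
        public static void main(String[] args) {
            // Print x and x - 1 side by side in binary for a few small values
            for (int x = 1; x <= 8; x++) {
                System.out.printf("x = %4s, x - 1 = %4s%n",
                        Integer.toBinaryString(x),
                        Integer.toBinaryString(x - 1));
            }
        }
    }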