
I have a variable that represents the XOR of two numbers. For example: int xor = 7 ^ 2;
I am looking at code that, according to its comments, finds the rightmost bit that is set in xor:

int rightBitSet = xor & ~(xor - 1);

I can't follow how exactly this piece of code works. In the case of 7 ^ 2 it does indeed set rightBitSet to 0001 in binary, i.e. 1 (indeed the rightmost set bit).
But if the xor is 7 ^ 3, then rightBitSet is set to 0100, i.e. 4, which is the same value as xor (and is not the rightmost set bit).
The logic of the code is to find a number that represents a bit in which the two numbers that make up xor differ, and although the comments say it finds the rightmost set bit, it seems to me that the code finds a bit pattern with one differing bit in any position.
Am I correct? I am also not sure how the code works. It seems that there is some relationship between a number X and the number X - 1 in their binary representations?
What is this relationship?
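
To make the two cases above concrete, here is a small throwaway sketch (plain Java; the class and method names are just for illustration) that prints the intermediate values:

public class RightBitSetDemo {
    // Prints xor and xor & ~(xor - 1) in binary for a pair of numbers.
    static void show(int a, int b) {
        int xor = a ^ b;                    // bits in which a and b differ
        int rightBitSet = xor & ~(xor - 1); // the expression in question
        System.out.printf("%d ^ %d = %s, rightBitSet = %s%n",
                a, b, Integer.toBinaryString(xor), Integer.toBinaryString(rightBitSet));
    }

    public static void main(String[] args) {
        show(7, 2); // prints: 7 ^ 2 = 101, rightBitSet = 1
        show(7, 3); // prints: 7 ^ 3 = 100, rightBitSet = 100
    }
}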

Cratylus

1 Answer


The effect of subtracting 1 from a binary number is to replace the least significant 1 in it with a 0, and set all the less significant bits to 1. For example:

5 - 1 = 101 - 1 = 100 = 4
4 - 1 = 100 - 1 = 011 = 3
6 - 1 = 110 - 1 = 101 = 5
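
A quick way to see this pattern on a few values is to print x and x - 1 side by side; a minimal sketch (the sample values are arbitrary):

public class MinusOneDemo {
    public static void main(String[] args) {
        // Subtracting 1 flips the lowest set bit to 0 and the bits below it to 1;
        // everything above the lowest set bit is unchanged.
        for (int x : new int[] {5, 4, 6, 12, 40}) {
            System.out.printf("%2d = %6s    %2d = %6s%n",
                    x, Integer.toBinaryString(x),
                    x - 1, Integer.toBinaryString(x - 1));
        }
    }
}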

So in evaluating x & ~(x - 1): above x's least significant 1, ~(x - 1) has the same set bits as ~x, so above x's least significant 1, x & ~(x - 1) has no 1 bits. By definition, x has a 1 bit at its least significant 1, and as we saw above, ~(x - 1) does too, but ~(x - 1) has 0s below that point. Therefore, x & ~(x - 1) has exactly one 1 bit: the least significant set bit of x.
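
As a sanity check, here is a short sketch comparing x & ~(x - 1) with the equivalent two's-complement form x & -x and with the JDK's Integer.lowestOneBit; for any int they should all agree:

public class LowestOneBitCheck {
    public static void main(String[] args) {
        int[] samples = {7 ^ 2, 7 ^ 3, 1, 6, 12, 40};
        for (int x : samples) {
            int viaExpression = x & ~(x - 1);            // the expression from the question
            int viaNegation   = x & -x;                  // same thing: -x == ~(x - 1) in two's complement
            int viaLibrary    = Integer.lowestOneBit(x); // JDK built-in
            System.out.printf("x = %6s -> %s / %s / %s%n",
                    Integer.toBinaryString(x),
                    Integer.toBinaryString(viaExpression),
                    Integer.toBinaryString(viaNegation),
                    Integer.toBinaryString(viaLibrary));
        }
    }
}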

Louis Wasserman
  • +1 for the first part. I did not understand the second paragraph at all. `above x's least significant 1, x & ~(x-1) has no 1 bits`. Why? – Cratylus Aug 04 '12 at 08:52
  • Rephrase: above its least significant 1, subtracting 1 from a binary number changes nothing at all; only the least significant 1 and the bits below it are affected. So `x - 1` has the same bits as `x` above the ls1 of `x`, so `x & ~(x - 1)` is all 0s above x's ls1. – Louis Wasserman Aug 04 '12 at 09:00
  • I think I got it. So essentially we get a representation of the least significant bit in which the numbers making up the xor differ (the bits above and below it are clear)? So the comments actually mean the least significant set bit? – Cratylus Aug 04 '12 at 09:04
  • Just curious. How do you remember this stuff? Do you write code manipulating bits often? I am asking because I am always interested in what others do, in case it can help me improve. – Cratylus Aug 04 '12 at 09:08
  • "Do you write code manipulating bits often?" [Yes](http://docs.guava-libraries.googlecode.com/git-history/release/javadoc/src-html/com/google/common/math/IntMath.html). – Louis Wasserman Aug 04 '12 at 09:44