
I want to see how bitwise NOT works through a simple example:

#include <iostream>
using namespace std;

int main() {
    int x = 4;
    int y;
    int z;
    y = ~(x << 1);
    z = ~(0x01 << 1);
    cout << "y = " << y << endl;
    cout << "z = " << z << endl;
}

This results in y = -9 and z = -3. I don't see how this happens. Can anyone educate me a bit?

Nicholas K
jwm

3 Answers


(x<<1) will shift the bits left by one, so

00000000 00000000 00000000 00000100

will become:

00000000 00000000 00000000 00001000

Which is the representation of 8. Then ~ will invert all the bits such that it becomes:

11111111 11111111 11111111 11110111

Which is the representation of -9.


0x01 is

00000000 00000000 00000000 00000001

in binary, so when shifted once becomes:

00000000 00000000 00000000 00000010

And then when ~ is applied we get:

11111111 11111111 11111111 11111101

Which is the representation of -3.
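(Not part of the original answer, just a quick way to check it: the small sketch below, which assumes a 32-bit int, prints the same bit patterns with std::bitset.)

#include <bitset>
#include <iostream>

int main() {
    int x = 4;
    // Shift left by one, then flip every bit; bitset shows the raw 32-bit pattern.
    std::cout << std::bitset<32>(x << 1)       << '\n'; // 00000000000000000000000000001000
    std::cout << std::bitset<32>(~(x << 1))    << '\n'; // 11111111111111111111111111110111
    std::cout << (~(x << 1))                   << '\n'; // -9
    std::cout << std::bitset<32>(~(0x01 << 1)) << '\n'; // 11111111111111111111111111111101
    std::cout << (~(0x01 << 1))                << '\n'; // -3
}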

GBlodgett
  • Thanks for your answer! How is such a long representation equivalent to a negative number? I thought the formula for converting binary to decimal was based on a sum of powers of 2. Please correct me if I'm wrong. – jwm Feb 11 '19 at 01:27
  • @jingweimo The most significant bit (the first bit) determines whether the number is positive or negative: if it is 1, the number is negative; if it is 0, the number is positive. – GBlodgett Feb 11 '19 at 01:30
  • Negative numbers are usually represented using [two's complement](https://en.wikipedia.org/wiki/Two's_complement) instead of sign-magnitude, where you take the bitwise NOT of the positive version of the number and add one to it. So negative numbers with a small magnitude get represented by a bunch of 1's. – eesiraed Feb 11 '19 at 01:32
  • The sum-of-2^x scheme only applies to positive numbers. For negative numbers, today's convention is to effectively add 2^N (where N is the number of bits in the number including the sign, e.g. N=32 for 32-bit integers) to translate the negative value into the (high) positive range, and only then apply the sum-of-2^x scheme. The range used for positive and negative numbers is chosen so that if the 2^(N-1) bit is set it represents a negative number. – Christoph Lipka Feb 11 '19 at 02:18
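To make that recipe concrete, here is a tiny sketch of my own (assuming a 32-bit int): take the raw bit pattern as an unsigned value and subtract 2^32, because the top bit is set.

#include <cstdint>
#include <iostream>

int main() {
    // Raw 32-bit pattern of ~(4 << 1): 1111...0111, i.e. 0xFFFFFFF7.
    std::uint32_t bits = 0xFFFFFFF7u;          // 4294967287 as an unsigned value
    // The top bit is set, so the signed value is bits - 2^32.
    std::int64_t value = static_cast<std::int64_t>(bits) - (std::int64_t{1} << 32);
    std::cout << value << '\n';                // prints -9
}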

Whether an integer is positive or negative (the sign of the integer) is stored in a dedicated bit, the sign bit. The bitwise NOT affects this bit, too, so any positive number becomes a negative number and vice versa.

Note that "dedicated bit" is a bit of an oversimplification, as most contemporary computers do not use "sign and magnitude" representation (where the sign bit would just switch the sign), but "two's complement" representation, where the sign bit also affects the magnitude.

For example, the 8-bit signed integer 00000000 would be 0, but 10000000 (sign bit flipped) would be -128.
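(A small illustration of my own, not from the answer: the fixed-width type std::int8_t is required to use two's complement, so the two 8-bit patterns mentioned above can be checked directly.)

#include <bitset>
#include <cstdint>
#include <iostream>

int main() {
    std::int8_t zero = 0;
    std::int8_t lowest = -128;  // the most negative 8-bit value
    // Print the raw 8-bit patterns: 0 is 00000000, -128 is 10000000,
    // i.e. flipping only the sign bit does not simply negate the value.
    std::cout << std::bitset<8>(static_cast<std::uint8_t>(zero))   << '\n'; // 00000000
    std::cout << std::bitset<8>(static_cast<std::uint8_t>(lowest)) << '\n'; // 10000000
}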

Christoph Lipka
  • Be careful with how you've worded this. It turns out that the high order bit indicates the sign, but the way you've worded it makes it sound like signed-magnitude rather than 2's complement. – David Hoelzer Feb 11 '19 at 01:25

Well, there is a very long story behind this.
To make it easier to understand, let's use binary numbers.

x = 4, i.e. x = 0b 0000 0000 0000 0000 0000 0000 0000 0100, because sizeof(int) = 4 bytes (32 bits).
After x<<1 the value is 0b 0000 0000 0000 0000 0000 0000 0000 1000, and after
~(x<<1) it is 0b 1111 1111 1111 1111 1111 1111 1111 0111.

And here the complication begins. Since int is a signed type, the first bit is the sign bit and the whole representation is two's complement.

So x = 0b 1111 1111 1111 1111 1111 1111 1111 0111 is x = -9,
and, for example, x = 0b 1111 1111 1111 1111 1111 1111 1111 1111 is x = -1, while x = 0b 0000 0000 0000 0000 0000 0000 0000 0010 is 2.

Learn more about two's complement.
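(A follow-up check of my own, not from this answer: under two's complement, ~n is always -n - 1, which is exactly why ~8 gives -9 and ~2 gives -3.)

#include <iostream>

int main() {
    int x = 4;
    // Two's complement identity: ~n == -n - 1.
    std::cout << std::boolalpha;
    std::cout << (~(x << 1)    == -(x << 1) - 1)    << '\n'; // true: ~8 == -9
    std::cout << (~(0x01 << 1) == -(0x01 << 1) - 1) << '\n'; // true: ~2 == -3
}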

dzuda11