2

Why does this code:

#include <iostream>
int main ()
{
  int x = 1;
  int y = ~x;
  std::cout << y;
}

always print -(x+1)? If x = 00000001, shouldn't y = 11111110?
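
For reference, here is a way to inspect the actual bit pattern (a minimal sketch, assuming `int` is 32 bits; `std::bitset` is used only to display the bits):

#include <bitset>
#include <iostream>
int main ()
{
  int x = 1;
  int y = ~x;                                // flip every bit of x
  std::cout << y << '\n';                    // prints -2
  std::cout << std::bitset<32>(y) << '\n';   // prints 11111111111111111111111111111110
}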

black-goat
    "If x = 00000001, shoudn't y = 11111110?" yes. But `11111110` as signed integer is -2. [Two's Complement](https://en.wikipedia.org/wiki/Two%27s_complement) – tkausl Oct 06 '16 at 19:22
  • `int` has a sign. You might want to try the same with `unsigned int` – 463035818_is_not_an_ai Oct 06 '16 at 19:23
  • @black-goat: But `y` **is** `1...11111110` in your experiment. What made you think it wasn't in the first place? `-(x+1)` is `-2`, which is `1...11111110` on a 2's-complement system. I.e. everything works exactly as you expected it to. – AnT stands with Russia Oct 06 '16 at 19:24
  • If `int` has 32 bits, `~x` is `11111111 11111111 11111111 11111110` in binary. `int` definitely has more than the 8 bits that would result in `11111110`. The logic is the same regardless of the number of bits though. –  Oct 06 '16 at 19:25

1 Answer

3

That's because you're on a two's-complement system. C++ doesn't guarantee that, but all (citation needed?) modern architectures have this property.
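
To see the relationship concretely, here is a small check (a sketch, assuming a two's-complement representation of `int`; the list of test values is arbitrary):

#include <iostream>
int main ()
{
  for (int x : {0, 1, 2, 41, -7})
  {
    // Flipping every bit of x produces the representation of -(x + 1)
    // on a two's-complement machine, so the two values printed per line match.
    std::cout << "~" << x << " = " << ~x
              << ", -(" << x << " + 1) = " << -(x + 1) << '\n';
  }
}

Each line prints the same value twice, e.g. `~1 = -2, -(1 + 1) = -2`.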

krzaq