What's happening here is signed integer overflow, which is undefined behavior because the exact representation of signed integers is not defined by the standard.
In practice, however, most machines use 2's complement representation for signed integers, and this particular program exploits that.
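To make that concrete, here is a minimal, hypothetical reconstruction of the kind of program being discussed (the variable names `x` and `y` and the exact literals are assumptions, not taken from your code):

```c
#include <stdio.h>

int main(void) {
    /* Assumed reconstruction: on a 32-bit, 2's complement int, both
       initializers end up as the most negative representable value. */
    int x = -0x80000000;
    int y = -0x80000000;

    /* Signed overflow: undefined behavior, but on typical 2's complement
       hardware the low 32 bits of the sum are all zero. */
    printf("%d\n", x + y);   /* commonly prints 0, but not guaranteed */
    return 0;
}
```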
`0x80000000` is an unsigned integer constant. The `-` negates it, and once the result is stored in a signed `int` the value is negative. Assuming `int` is 32-bit on your system, this value still fits: it is the smallest value a signed 32-bit `int` can hold (-2147483648), and the hexadecimal representation of that number's bit pattern happens to be `0x80000000`.
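If you want to check this on your own machine, a small sketch like the following (assuming a 32-bit, 2's complement `int`; the conversion involved is implementation-defined, not portable) shows that the stored value equals `INT_MIN` and that its bit pattern is `0x80000000`:

```c
#include <limits.h>
#include <stdio.h>

int main(void) {
    int x = -0x80000000;   /* conversion to int is implementation-defined,
                              but yields INT_MIN on typical systems */

    printf("x         = %d\n", x);                  /* -2147483648 */
    printf("INT_MIN   = %d\n", INT_MIN);            /* -2147483648 */
    printf("bits of x = 0x%08x\n", (unsigned)x);    /* 0x80000000  */
    return 0;
}
```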
A convenient property of 2's complement representation is that addition does not need to treat the sign specially: signed numbers are added bit-for-bit exactly the same way as unsigned numbers.
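You can see this property with two ordinary values: reinterpreting the bit patterns as unsigned and adding them produces exactly the same result bits as the signed addition. A small sketch, using the fixed-width types from `<stdint.h>` for clarity:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    int32_t  sa = -3, sb = 5;
    uint32_t ua = (uint32_t)sa, ub = (uint32_t)sb;  /* same bit patterns */

    int32_t  ssum = sa + sb;   /* signed addition: 2 */
    uint32_t usum = ua + ub;   /* unsigned addition of the same bits */

    /* Both additions produce the identical bit pattern. */
    printf("signed   sum bits: 0x%08" PRIx32 " (%" PRId32 ")\n", (uint32_t)ssum, ssum);
    printf("unsigned sum bits: 0x%08" PRIx32 "\n", usum);
    return 0;
}
```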
So when we add `x` and `y`, we get this:

  0x80000000
+ 0x80000000
------------
 0x100000000
Because an `int` on your system is 32-bit, only the lowest 32 bits are kept, and the value of those bits is 0.
Again, note that this is actually undefined behavior. It works because your machine uses 2's complement representation for signed integers and `int` is 32-bit. This is true for most machines and compilers, but not all.
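If you need the result to be well-defined everywhere, one common approach is to test for overflow before adding. The helper below is only a sketch (`add_checked` is a hypothetical name, not a standard function); compilers can also catch this at run time with `-fsanitize=signed-integer-overflow` in GCC and Clang.

```c
#include <limits.h>
#include <stdio.h>

/* Hypothetical helper: returns 1 and stores the sum if a + b is
   representable in an int, returns 0 otherwise. */
static int add_checked(int a, int b, int *out) {
    if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b))
        return 0;                 /* would overflow */
    *out = a + b;
    return 1;
}

int main(void) {
    int x = INT_MIN, y = INT_MIN, sum;

    if (add_checked(x, y, &sum))
        printf("sum = %d\n", sum);
    else
        puts("overflow: INT_MIN + INT_MIN is not representable");
    return 0;
}
```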