int i=~0;
uint j=(uint)i;
j++;
printf("%u",j);
I am slightly confused: before the increment, j is "4294967295", but after the increment (j++), instead of becoming "4294967296" it is 0. Can anyone please explain?
The range of a 32-bit unsigned int is
0 to 4,294,967,295
So incrementing beyond this maximum value (adding 1 to it) wraps the value around to 0.
Edit:
§6.2.5 Types ¶9: "A computation involving unsigned operands can never overflow, because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting type."
It's because the value wraps around (often loosely called overflow): the data type goes past the maximum value it can represent.
int i = ~0;
All bits are set to 1. For an int, this is interpreted as -1 (on a two's-complement machine).
uint j=(uint)i;
You copy the value and convert it to unsigned int. -1 can't be represented by an unsigned int, so, similar to the wrap-around described below, the result again has all its bits set to 1.
j++;
When you add one, it wraps around. It's easy to see why if you look at the addition in bits. The number is represented by a fixed number of bits; on your machine an int is 32 bits. For a 4-bit number it would look like this:
1111 + 1 = 10000
But the highest-order bit has nowhere to be stored, so for an unsigned integer this is defined to wrap around like this:
1111 + 1 = 0000