I am trying to understand the meaning of the statement:
```c
(int)(unsigned)-1 == -1;
```
To my current understanding, the following things happen: `-1` is a signed `int` and is converted to `unsigned int`. The result of this is that, due to wrap-around behavior, we get the maximum value that can be represented by the `unsigned` type. Next, this `unsigned` maximum value that we got in step 1 is converted to `signed int`. But note that this maximum value is out of range of the `signed int` type. And since signed integer overflow is undefined behavior, the program will result in undefined behavior.
My questions are:
- Is my above explanation correct? If not, then what is actually happening?
- Is this undefined behavior as I suspected, or implementation-defined behavior?
PS: I know that if it is undefined behavior (as opposed to implementation-defined), then we cannot rely on the output of the program. So we cannot say whether we will always get `true` or `false`.