There are some misconceptions here, all of which can be fixed by revisiting first principles:
A number is a number.
When you write the decimal literal `42`, that is the number 42. When you write the hexadecimal literal `0x2A`, that is still the number 42. When you assign either of these expressions to an `int`, the `int` contains the number 42.
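A minimal sketch of that equivalence (the variable names are just illustrative):

```cpp
#include <iostream>

int main() {
    int fromDecimal = 42;    // decimal literal
    int fromHex     = 0x2A;  // hexadecimal literal: the same number
    std::cout << fromDecimal << ' ' << fromHex << '\n';  // prints: 42 42
    std::cout << (fromDecimal == fromHex) << '\n';       // prints: 1
}
```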
A number is a number.
Which base you used does not matter. It changes nothing. Writing a literal in hex and then assigning it to an `int` does not change what happens. It does not magically make the number be interpreted, handled, or represented any differently.
A number is a number.
What you've done here is assign `0xFFFFFFE2`, which is the number 4294967266, to `myInt`. That number is larger than the maximum value of a [signed] `int` on your platform, so it doesn't fit. The result of that out-of-range conversion is implementation-defined (and, since C++20, guaranteed to wrap), and you happen to be seeing a "wrap around" to -30 because two's complement is how your computer's chips and memory represent signed integers.
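You can observe this directly (assuming a 32-bit two's-complement `int`, as on virtually every modern platform):

```cpp
#include <iostream>

int main() {
    // 0xFFFFFFE2 does not fit in a 32-bit int, so the literal has type
    // unsigned int and holds 4294967266; converting it to int wraps here.
    int myInt = 0xFFFFFFE2;
    std::cout << myInt << '\n';  // prints: -30
}
```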
That's it.
It's got nothing to do with hexadecimal, so there's no "choice" to be made between hex and decimal literals. The same thing would happen if you used a decimal literal:
```cpp
myInt = 4294967266;
```
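A quick check that the two spellings really are interchangeable (again assuming a 32-bit two's-complement `int`; `same` is an illustrative name):

```cpp
#include <iostream>

int main() {
    int myInt = 0;
    int same  = 0;
    myInt = 0xFFFFFFE2;  // hex literal: the number 4294967266
    same  = 4294967266;  // decimal literal: the very same number
    std::cout << myInt << ' ' << same << '\n';  // prints: -30 -30
}
```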
Furthermore, if you were looking for a way to "trigger" this wrap-around behaviour, don't: out-of-range conversions like this one were only implementation-defined until C++20, and signed arithmetic overflow is outright undefined behaviour.
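If you genuinely need wrap-around semantics, the well-defined route is unsigned arithmetic, which the standard guarantees to be modular. A minimal sketch:

```cpp
#include <cstdint>
#include <iostream>

int main() {
    // Unsigned arithmetic is defined to wrap modulo 2^N:
    std::uint32_t u = 0xFFFFFFE2u;  // 4294967266
    u += 100;                       // (4294967266 + 100) mod 2^32 = 70
    std::cout << u << '\n';         // prints: 70
}
```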
If you want to manipulate the raw bits and bytes that make up `myInt`, you can alias it via a `char*`, an `unsigned char*` or a `std::byte*`, and play around that way.
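For example, here is a sketch of byte-level inspection via `unsigned char*` (the exact output assumes a little-endian, two's-complement machine):

```cpp
#include <cstddef>
#include <iostream>

int main() {
    int myInt = -30;  // stored as 0xFFFFFFE2 on a two's-complement machine

    // char, unsigned char and std::byte pointers may alias any object's bytes.
    auto* bytes = reinterpret_cast<unsigned char*>(&myInt);
    for (std::size_t i = 0; i < sizeof myInt; ++i)
        std::cout << static_cast<int>(bytes[i]) << ' ';
    std::cout << '\n';  // little-endian output: 226 255 255 255
}
```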