The situation is the following:
- a 32-bit integer overflows
- malloc, which expects a 64-bit integer, gets this integer as input
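For concreteness, here is a minimal sketch of the kind of code I mean (the values are made up; the point is that the 32-bit multiplication wraps to a negative value before malloc ever sees it):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Hypothetical sizes: the product 60000 * 60000 = 3.6e9 does not fit
       in a 32-bit int. Signed overflow is undefined behaviour in standard C,
       but on typical two's-complement machines the result simply wraps
       around to a negative value. */
    int rows = 60000;
    int cols = 60000;
    int bytes = rows * cols;          /* ends up negative */

    printf("bytes as int: %d\n", bytes);

    /* malloc takes a 64-bit size_t here, so 'bytes' is implicitly
       converted before the call. */
    char *p = malloc(bytes);
    printf("malloc %s\n", p ? "succeeded" : "failed");
    free(p);
    return 0;
}
```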
Now, on a 64-bit machine, which statement is correct (if any at all)?
Say that the signed binary integer 11111111001101100000101011001000 is simply negative due to an overflow. This is a practical, existing problem, since you might want to allocate more bytes than you can describe with a 32-bit integer, but the value then gets read as a 64-bit integer.
1. Malloc reads this as a 64-bit integer, finding 11111111001101100000101011001000################################, with # being a wildcard bit representing whatever data is stored after the original integer. In other words, it reads a value close to the maximum of 2^64 and tries to allocate some quintillion bytes. It fails.
2. Malloc reads this as a 64-bit integer, casting it to 0000000000000000000000000000000011111111001101100000101011001000, possibly because that is how it is loaded into a register, leaving the upper bits zero. It does not fail, but allocates memory for the negative value as if it were a large positive unsigned value.
3. Malloc reads this as a 64-bit integer, casting it to ################################11111111001101100000101011001000, with # a wildcard representing whatever data was previously in the register. It fails quite unpredictably, depending on that leftover value.
4. The integer does not overflow at all, because even though it is 32-bit, it is still in a 64-bit register, and therefore malloc works fine.
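One way to see which bit pattern malloc actually receives (assuming the pattern above) is to do the implicit int-to-size_t conversion by hand and print the result, something like:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* The negative 32-bit value from the question
       (bit pattern 11111111001101100000101011001000). Converting the
       out-of-range hex constant to int32_t is implementation-defined,
       but on ordinary two's-complement machines it just keeps the bits. */
    int32_t n = (int32_t)0xFF360AC8;

    /* malloc's parameter is size_t, so the argument goes through the
       usual integer conversion; doing it explicitly shows what arrives. */
    size_t as_size = (size_t)n;

    printf("as int32_t : %d\n", (int)n);
    printf("as size_t  : %zu (hex 0x%016zx)\n", as_size, as_size);
    return 0;
}
```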
I actually tested this, and the malloc failed (which would imply that either 1 or 3 is correct). I assume 1 is the most logical answer. I also know the fix (using size_t as input instead of int).
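For reference, the fix I mean looks roughly like this (a sketch; the function name is made up), doing the size arithmetic in size_t so the multiplication never happens in 32 bits:

```c
#include <stdlib.h>

/* Hypothetical helper: casting before multiplying makes the arithmetic
   happen in 64-bit size_t, so the intermediate result can no longer
   wrap around in 32 bits. */
void *alloc_2d(int rows, int cols, size_t elem_size)
{
    size_t bytes = (size_t)rows * (size_t)cols * elem_size;
    return malloc(bytes);
}
```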
I'd just really like to know what actually happens. For some reason I can't find any clarification on how 32-bit integers are actually treated on 64-bit machines for such an unexpected 'cast'. I'm not even sure whether it being in a register actually matters.