As others have already said, passing an `int` to `printf` when it's expecting a `double` causes undefined behaviour, and anything could happen. You might be interested in the reason why the program prints 98.979980 on the third line and not some random number.
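For concreteness, here is a minimal sketch of the situation being discussed; the exact program from the question is assumed to look roughly like this:

```c
#include <stdio.h>

int main(void)
{
    printf("line 2: %f\n", 98.98); /* OK: %f expects a double and gets one */
    printf("line 3: %f\n", 98);    /* undefined: %f expects a double,
                                      but an int is passed */
    return 0;
}
```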
Arguments are passed to `printf` on the stack. When line 2 passes 98.98 to `printf`, it is pushed onto the stack with the least significant part of the number first. Then `printf` returns, and on the third line it is called again, now with 98 pushed onto the stack. On your architecture the `int` type seems to be 32 bits, half the size of the `double` type, so this overwrites only the lower half of the 98.98 that was on the stack earlier. The upper half of 98.98 is still on the stack.
Now the third call to `printf` reads a `double` from the stack. The most significant half of what it reads comes from the 98.98 that was on the stack earlier, and the less significant half comes from the binary representation of 98; this is why the result is so close to 98.98. Since 98 is such a small number, its most significant bits are all 0, and setting the least significant half of 98.98 to mostly zeros gives you a slightly smaller number.
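In hex (assuming IEEE-754 doubles), 98.98 is stored as 0x4058BEB851EB851F. Keeping the upper half 0x4058BEB8 and replacing the lower half 0x51EB851F with 0x00000062 (decimal 98) yields 0x4058BEB800000062, which is approximately 98.9799805, printed by `%f` as 98.979980.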
If line 3 used a number that has more bits set to 1, you would get a result that is larger than 98.98. For example, the binary representation of -1 has all its bits set to 1, and you get:
printf("line 2: %f\n", 98.98); # 98.98
printf("line 3: %f\n", -1); # 98.980042
If the compiler used 64-bit `int`s, or passed `double`s with the most significant part first, or used a register instead of the stack to pass parameters, you would get very different results.
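On x86-64, for example, floating-point variadic arguments travel in XMM registers while integers travel in general-purpose registers, so `printf` would read whatever happened to be left in the relevant XMM register. In any case, the well-defined way to print an integer with `%f` is to convert it yourself:

```c
printf("line 3: %f\n", (double)98); /* prints 98.000000 */
```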