Until very recently, I thought that:
printf("%f",x)
would attempt to read a 4-byte floating-point value from the stack.printf("%lf",x)
would attempt to read an 8-byte floating-point value from the stack.
However, the following piece of code seems to yield the correct output:
float f = 1234.5;
double d = 12345678.9;
printf("Output = %zu %zu\n", sizeof(f), sizeof(d)); // Output = 4 8
printf("Output = %.1f %.1f\n", f, d);               // Output = 1234.5 12345678.9
I believe this proves that printf("%f", x) actually reads an 8-byte floating-point value from the stack: both values are printed correctly with the same %f conversion, yet 12345678.9 has too many significant digits to be represented exactly in a 4-byte float.
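As a sanity check on the precision argument, here is a minimal snippet (assuming a typical platform with a 4-byte IEEE 754 float and an 8-byte double) showing that 12345678.9 does not survive being stored in a float:

#include <stdio.h>

int main(void)
{
    float  f = 12345678.9f;  /* rounded to the nearest representable float */
    double d = 12345678.9;

    printf("%.1f\n", f);     /* prints 12345679.0 on such a platform */
    printf("%.1f\n", d);     /* prints 12345678.9 */
    return 0;
}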
Now my question is: when calling printf with a 4-byte float variable, how does the compiler know that it should convert it to an 8-byte double value before pushing it onto the stack and calling printf?
Is that a part of the standard when calling functions that take floating-point arguments?
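To make the question concrete, here is a small sketch of what I suspect happens with any variadic function (my_print below is a made-up function, just for illustration): the float argument appears to arrive as a double, so it has to be read back with va_arg(ap, double) rather than va_arg(ap, float).

#include <stdarg.h>
#include <stdio.h>

/* Hypothetical variadic function, for illustration only. */
static void my_print(int count, ...)
{
    va_list ap;
    va_start(ap, count);
    for (int i = 0; i < count; i++) {
        /* Reading with va_arg(ap, float) is not valid here, because any
           float argument seems to be promoted to double before the call. */
        double v = va_arg(ap, double);
        printf("%.1f\n", v);
    }
    va_end(ap);
}

int main(void)
{
    float  f = 1234.5f;
    double d = 12345678.9;
    my_print(2, f, d);  /* prints 1234.5 and 12345678.9 */
    return 0;
}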
Thanks