
Until very recently, I thought that:

  • printf("%f",x) would attempt to read a 4-byte floating-point value from the stack.

  • printf("%lf",x) would attempt to read an 8-byte floating-point value from the stack.

However, the following piece of code seems to yield the correct output:

float  f = 1234.5;
double d = 12345678.9;
printf("Output = %u %u\n",sizeof(f),sizeof(d)); // Output = 4 8
printf("Output = %.1f %.1f\n",f,d);             // Output = 1234.5 12345678.9

I believe this proves that printf("%f",x) reads an 8-byte floating-point value from the stack: both values are printed correctly, yet 12345678.9 cannot be represented exactly in a 4-byte float (it would round to 12345679.0).

Now my question is: when calling printf with a 4-byte float variable, how does the compiler know that it should convert it to an 8-byte double value before pushing it onto the stack and calling printf?

Is that a part of the standard when calling functions that take floating-point arguments?

Thanks

barak manos
    http://stackoverflow.com/questions/6395726/how-does-printf-and-co-differentiate-beetween-float-and-double It is a va_args issue. – this Jan 29 '14 at 21:37
  • Oh... so this is the case **only** when passing a `float` variable to a function that takes a variable number of arguments (via `vararg`) such as `printf`? – barak manos Jan 29 '14 at 21:39
  • Yes, you can pass char and float, but you have to retrieve them as int and double. – this Jan 29 '14 at 21:40
  • OK, I have realized it in the past about `char` and `short`, but figured it was a "rounding up to a multiple of 4" that would take place for **every** function call (hence assumed that `char/short/int/float` would all be passed as 4-byte arguments, regardless of the `vararg` issue)... Thanks for the info :) – barak manos Jan 29 '14 at 21:44
    Since you know that `printf` arguments must match the format string, use either `…%zu…", sizeof(…)` or `…%u…", (unsigned int)sizeof(…)`. Printing the result of `sizeof` with `%u` can fail because `size_t` does not have to be `unsigned int`. – Pascal Cuoq Jan 29 '14 at 21:48
  • Thanks for the info... BTW, it will just print the wrong output (won't cause any memory access violations or anything) – barak manos Jan 29 '14 at 22:18
  • @barakmanos: well, maybe. The behavior is undefined, so it actually could do *anything*. I prefer "use your credit card to order a burrito for a randomly chosen compiler engineer" as the default. – Stephen Canon Jan 29 '14 at 23:25

1 Answer


how does the compiler know that it should cast it to an 8-byte double value before pushing it into the stack and calling printf?

It doesn't have to know when to do it - it always does; it's part of the default argument promotions.

Is that a part of the standard when calling functions that take floating-point arguments?

No, just variadic ones.

Carl Norum