
I am trying to understand the difference between the following:

printf("%f",4.567f);
printf("%f",4.567);

How does using the f suffix change/influence the output?


1 Answer


How does using the 'f' change/influence the output?

The f at the end of a floating point constant determines the type and can affect the value.


4.567 is a floating point constant with the type and precision of double. A double can typically represent exactly about 2^64 different values. 4.567 is not one of them*1. The closest alternatives are typically exactly

4.56700000000000017053025658242404460906982421875     // best
4.56699999999999928235183688229881227016448974609375  // next best double

4.567f is a floating point constant with the type and precision of float. A float can typically represent exactly about 2^32 different values. 4.567 is not one of them. The closest alternatives are typically exactly

4.566999912261962890625  // best
4.56700038909912109375   // next best float
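
A minimal sketch that makes these stored values visible, assuming a typical binary64 double / binary32 float implementation (the exact digits printed depend on the quality of the library):

#include <stdio.h>

int main(void) {
    // Ask for many digits so the exact stored value of the double constant shows through.
    printf("%.50f\n", 4.567);
    // The float constant is promoted to double when passed to printf(),
    // so this prints the exact value that was stored in the float.
    printf("%.25f\n", 4.567f);
    return 0;
}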

When passed to printf() as one of the ... arguments, a float is converted to a double with the same value.
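
To illustrate that promotion (a small sketch; both lines print the same 6-digit rounding):

#include <stdio.h>

int main(void) {
    float f = 4.567f;
    // printf() is variadic, so a float argument undergoes default argument
    // promotion to double; "%f" therefore matches both calls below.
    printf("%f\n", f);       // float, promoted to double
    printf("%f\n", 4.567);   // double
    return 0;
}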

So the question becomes what is the expected difference in printing?

printf("%f",4.56700000000000017053025658242404460906982421875);
printf("%f",4.566999912261962890625);

Since "%f" prints 6 digits after the decimal point by default, the output for both rounds to:

4.567000

To see a difference, print with more precision or try 4.567e10, 4.567e10f.

45670000000.000000 // double
45669998592.000000 // float
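
A compilable sketch of that comparison (the digits shown are typical; another implementation may print slightly different values):

#include <stdio.h>

int main(void) {
    // Same decimal constant, once with double precision and once with float precision.
    printf("%f\n", 4.567e10);    // typically 45670000000.000000
    printf("%f\n", 4.567e10f);   // typically 45669998592.000000
    return 0;
}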

Your output may differ slightly due to quality of implementation issues.


*1 C supports many floating-point encodings. A common one is binary64. Typical floating-point values are thus encoded as a sign * binary fraction * 2^exponent. Even simple decimal values like 0.1 cannot be represented exactly in this form.
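
For example, a quick sketch showing the binary64 approximation of 0.1 (assuming binary64 double; the digits shown are typical):

#include <stdio.h>

int main(void) {
    // With more than the default 6 digits, the approximation of 0.1 becomes visible.
    printf("%.20f\n", 0.1);   // typically 0.10000000000000000555
    return 0;
}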
