
I have some simple code that prints a floating-point number.

But I get different output when printing the same number as a float versus as a double.

Upon further investigation I found that the precision specifier rounds the output, and that float arguments passed to printf() are first converted to double.

This only adds to the confusion.

So, in the following code snippet:

float f = 0.07 * 13.5; // 0.945000
double d = 0.07 * 13.5; // 0.945000

printf("%f %.2f %g\n", f, f, f);
printf("%lf %.2lf %g\n", d, d, d);
printf("%f %.2f %g\n", d, d, d);

prints:
0.945000 0.94 0.945
0.945000 0.95 0.945
0.945000 0.95 0.945
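
To dig a bit further, I also tried printing the values with many more digits, to see what is actually stored (I'm not sure this is the most reliable way to inspect them). Roughly:

#include <stdio.h>

int main(void)
{
    float f = 0.07 * 13.5;
    double d = 0.07 * 13.5;

    /* 20 digits after the decimal point, to expose the stored values */
    /* (printf promotes the float argument to double before formatting) */
    printf("f = %.20f\n", f);
    printf("d = %.20f\n", d);

    return 0;
}

Judging from the rounded output above, the double product seems to land slightly above 0.945 while the float ends up slightly below it, but I don't see why that should matter for such a value.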

Why is the result different between float and double? It's not an unusual or large number, and even float precision should be more than enough for it.

Thank you.

user174174

0 Answers