Can someone explain why C's printf is rounding down in the second case?
printf("%.03f", 79.2025); /* "79.203" */
printf("%.03f", 22.7565); /* "22.756" */
OP's post hints at:
printf("%.03f", 79.2025); /* "79.203" */
printf("%.03f", 22.7565); /* "22.756" */
Why is one value rounding up and the other down?
Numbers like 79.2025 and 22.7565 are not exactly representable as double on your system. Instead, nearby values are encoded.
The two likely exact double values are
79.2025000000000005684341886080801486968994140625
22.756499999999999062083588796667754650115966796875
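One way to see this for yourself (a minimal sketch, assuming IEEE-754 binary64 and a printf that prints the exact stored value when asked for enough digits):

#include <stdio.h>

int main(void) {
    /* 48 digits is enough to show the full decimal expansion of any
       double in this range; trailing zeros may appear. */
    printf("%.48f\n", 79.2025); /* 79.2025000000000005684... */
    printf("%.48f\n", 22.7565); /* 22.7564999999999990620... */
    return 0;
}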
This is due to the binary floating-point encoding. Most systems use binary floating-point, although C also allows bases 16, 10, and other powers of 2. (I have never worked on an "other powers of 2" system.)
Printing those two values to the nearest 0.001, as printf("%.03f", ...) directs, gives the output below, which matches OP's results.
79.203 // 79.20250000000000056... rounds up as 50000000000056... > 50000000000000...
22.756 // 22.75649999999999906... rounds down as 49999999999906... < 50000000000000...
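To double-check that these long expansions really are the encoded values, round-trip them through strtod() (a sketch, assuming a correctly rounding strtod()):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* If these compare equal, the literals 79.2025 and 22.7565 encode
       exactly the long decimal values shown above. */
    double a = strtod("79.2025000000000005684341886080801486968994140625", NULL);
    double b = strtod("22.756499999999999062083588796667754650115966796875", NULL);
    printf("%d %d\n", a == 79.2025, b == 22.7565); /* typically "1 1" */
    return 0;
}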
The below is also interesting. Both 1.0625 and 1.1875 are exactly encodable as double, yet one typically rounds up and the other rounds down, given the usual "round ties to even" rule. Depending on your implementation, the output may vary, yet the output below is common.
printf("%.03f", 1.0625); /* "1.062" */
printf("%.03f", 1.1875); /* "1.188" */
Using a different precision of binary floating-point type does not alter the fundamental issue: floating-point objects assigned a decimal constant of the form x.xxx5 in code rarely hold that exact value. About 50% of them will be slightly more than x.xxx5 and the rest slightly less.
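That 50% claim can be explored with a quick experiment (a rough sketch, assuming IEEE-754 double and correctly rounding conversions; the few exact ties such as 0.0625 fall to "round ties to even"):

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int up = 0, down = 0;
    for (int i = 0; i < 1000; i++) {
        /* Nearest double to the exact decimal i/1000 + 0.0005 */
        double d = (2 * i + 1) / 2000.0;
        char buf[32];
        snprintf(buf, sizeof buf, "%.3f", d);
        /* Recover the printed decimals as an integer 0..1000 */
        long v = lround(strtod(buf, NULL) * 1000);
        if (v > i) up++; else down++;
    }
    printf("up: %d, down: %d\n", up, down); /* roughly a 50/50 split */
    return 0;
}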
printf("%.03f", 79.2025); /* "79.203" */
printf("%.03f", 22.7565); /* "22.756" */
The precision requests 3 characters after the decimal point:
printf("%6.3f", 79.2025); /* "79.203" */
printf("%6.3f", 22.7565); /* "22.756" */
2 characters + '.' + 3 characters = 6 characters, the total field width.
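A quick illustration of how that width behaves (hypothetical values, just to show the padding):

#include <stdio.h>

int main(void) {
    /* The 6 is a minimum total width: shorter results are padded
       with leading spaces, longer results are not truncated. */
    printf("[%6.3f]\n", 1.5);      /* "[ 1.500]" */
    printf("[%6.3f]\n", 123.4567); /* "[123.457]" - wider than 6 */
    return 0;
}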
This is common behaviour with floating-point precision; it's not exact, since a finite number of bits is used to represent a value. At some point, decimal places become so small that they exceed the precision available at a given bit width.

Perhaps seek a higher-precision type, such as long double.
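A sketch of that idea (note: on many systems long double is still a binary format, so 22.7565L remains inexact and the rounding direction may or may not change):

#include <stdio.h>

int main(void) {
    printf("%.3Lf\n", 22.7565L);  /* result depends on the platform's long double */
    printf("%.30Lf\n", 22.7565L); /* inspect the stored value */
    return 0;
}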