We have an input field where the user is supposed to enter a percentage, which we display with 1 decimal. If the user chooses to enter 31.45, we noticed that the stored floating-point representation is 31.44999999999..., which is correctly formatted as 31.4 for the stored value, but proper mathematical rounding rules say the original number 31.45 should round to 31.5.
So I looked for similar questions, but most answers say that the string formatters should handle this case correctly. For me, they do not.
I tried the following C code:
#include <math.h>
#include <stdio.h>
int main(int argc, char* argv[])
{
    float x = 31.45;
    printf("%.20f\n", x);
    printf("%.3f\n", x);
    printf("%.2f\n", x);
    printf("%.1f\n", x);
    printf("%.0f\n", x);
    double y = 31.45;
    printf("%.20f\n", y);
    printf("%.3f\n", y);
    printf("%.2f\n", y);
    printf("%.1f\n", y);
    printf("%.0f\n", y);
    return 0;
}
The output:
31.45000076293945312500
31.450
31.45
31.5
31
31.44999999999999928946
31.450
31.45
31.4
31
Note that float handles it correctly, while double doesn't!
I tried the same thing in Python:
>>> "%.1f" % 31.45
'31.4'
I tried Wolfram Alpha (http://www.wolframalpha.com/input/?i=round+31.45+to+nearest+.1) and got the same result.
NSNumberFormatter in Obj-C also has the same problem.
Excel handles it correctly though! 31.45 is formatted as 31.5 when using one decimal.
I'm not sure what to think of this. I expected the string formatters to be smart enough to handle the limitations of floating-point values, but it seems most are not. Any hints on how to handle this in a generic way? Also, is there any logic to why float behaves better than double in this case, or is it just luck?