The earlier posts refer to the problem of storing numbers that are non-terminating or that need more precision than a double can hold. The numbers in my example are terminating decimals that end at the second decimal place.
I have the following C code, which I have tested on IDEONE.com and in NetBeans, and both give me the same output. I have also tested it on three different computers: a Dell laptop, an HP laptop, and a Dell desktop. All three run Windows 10, so I can't rule out the operating system as a factor.
When I multiply the double 2.03 by 100 and store the result in an int, the value is stored as 202 instead of 203.
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char** argv)
{
    double s, t, u;
    int a, b, c;

    s = .99;
    t = 2.03;
    u = 5.00;

    a = 100 * s;
    b = 100 * t;
    c = 100 * u;

    printf("\n%.12f\t%.12f\t%.12f", s, t, u);
    printf("\n%d\t%d\t%d", a, b, c);

    return (EXIT_SUCCESS);
}
My output is:
0.990000000000 2.030000000000 5.000000000000
99 202 500
I'm fairly sure it has to do with how the doubles are stored: if I display them to 16 decimal places I do see a difference, but I'm not confident it is reasonable to go out that many decimal places.
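For what it's worth, here is a stripped-down sketch of what I mean, printing with %.17g, which as I understand it shows enough digits to reveal the value the double actually holds:

#include <stdio.h>

int main(void)
{
    double t = 2.03;

    /* print the value actually stored in t and in the product */
    printf("t              = %.17g\n", t);             /* 2.0299999999999998 */
    printf("100 * t        = %.17g\n", 100 * t);       /* 202.99999999999997 */
    printf("(int)(100 * t) = %d\n", (int)(100 * t));   /* truncates to 202 */

    return 0;
}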
Does it make sense that, working with such common numbers, I would see this difference when I force a double into an int? And if so, what is the recommended coding to use instead?
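For context, the only alternative I have come across so far is to round to the nearest integer instead of letting the conversion truncate, for example with lround() from <math.h> (sketch below; may need -lm when linking). I don't know whether that is the idiomatic fix or just hides the real problem.

#include <stdio.h>
#include <math.h>   /* lround(), C99 */

int main(void)
{
    double t = 2.03;

    /* lround() rounds to the nearest integer instead of truncating toward zero */
    long b = lround(100 * t);   /* 203 instead of 202 */

    printf("%ld\n", b);
    return 0;
}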