I am working on an application that calculates ppm (parts per million) and checks whether it exceeds a certain threshold. I recently ran into the precision errors of floating-point calculation.
double threshold = 1000.0;
double Mass = 0.000814;
double PartMass = 0.814;
// ppm = mass ratio scaled by one million
double IncorrectPPM = Mass / PartMass * 1000000;
// Round-tripping through decimal appears to "clean up" the value
double CorrectedPPM = (double)((decimal)IncorrectPPM);
Console.WriteLine("Mass = {0:R}", Mass);
Console.WriteLine("PartMass = {0:R}", PartMass);
Console.WriteLine("IncorrectPPM = {0:R}", IncorrectPPM);
Console.WriteLine("CorrectedPPM = {0:R}", CorrectedPPM);
Console.WriteLine("Is IncorrectPPM over threshold? " + (IncorrectPPM > threshold));
Console.WriteLine("Is CorrectedPPM over threshold? " + (CorrectedPPM > threshold));
The above code generates the following output:
Mass = 0.000814
PartMass = 0.814
IncorrectPPM = 1000.0000000000002
CorrectedPPM = 1000
Is IncorrectPPM over threshold? True
Is CorrectedPPM over threshold? False
As you can see, the calculated ppm 1000.0000000000002 has a trailing 2, which causes my application to falsely judge that the value is over the 1000 threshold. All inputs to the calculation are given to me as double values, so I can't do the calculation in decimal. I also can't round the calculated value, since rounding could make the threshold comparison incorrect. I noticed that if I cast the calculated double into decimal and then cast it back to double, the 1000.0000000000002 value gets corrected to 1000.
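For reference, I tried the same round trip on a couple of other doubles that carry representation error (the sample values and variable names here are just ones I picked to experiment with), and it cleaned them up as well:

double a = 0.1 + 0.2;                 // prints 0.30000000000000004 with {0:R}
double aFixed = (double)((decimal)a); // prints 0.3
double b = 1.0 / 3.0;                 // prints 0.3333333333333333
double bFixed = (double)((decimal)b); // prints 0.333333333333333
Console.WriteLine("{0:R} -> {1:R}", a, aFixed);
Console.WriteLine("{0:R} -> {1:R}", b, bFixed);

So the round trip seems to consistently snap the value to a shorter representation, but I don't understand the rule it follows.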
Question:
Does anyone know how the computer knows, in this case, that it should change the 1000.0000000000002 value to 1000 when casting to decimal?
Can I rely on this trick to avoid the precision issues of double calculation?
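For context, here is one experiment where the round trip did not restore the exact value, which is part of why I'm unsure whether to rely on it (the (double)0.1f is just a contrived way to produce a larger error than my ppm calculation would; outputs are what I saw on my machine):

double c = (double)0.1f;              // prints 0.10000000149011612 with {0:R}
double cFixed = (double)((decimal)c); // prints 0.100000001490116, still not 0.1
Console.WriteLine("{0:R} -> {1:R}", c, cFixed);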