When I was programming a little ODE solver for which I needed a time array, I ran into the following really strange behavior. The following code should reproduce the problem.
#include <iostream>
using namespace std;
int main() {
    double t = 0.0;
    for (t = 0.0; t <= 1.00; t += 0.01) {
        cout << t << " ";
    }
    cout << endl;
    cout << "t = " << t << " and t <= 1.00? " << (t <= 1.00) << endl;
    double c = 1.00;
    cout << "c = " << c << " and c <= 1.00? " << (c <= 1.00) << endl;
    cout << "t == 1.00? " << (t == 1.00) << " and c == 1.00? " << (c == 1.00) << endl;
    return 0;
}
This gives the following output:
0 0.01 0.02 0.03 0.04 ... 0.97 0.98 0.99
t = 1 and t <= 1.00? 0
c = 1 and c <= 1.00? 1
t == 1.00? 0 and c == 1.00? 1
My question is: why do (t <= 1.00) and (t == 1.00) return false, when t should clearly be equal to 1 and t is of type double?
I cannot really avoid this problem, because in my real code the t-step is not hard-coded, etc.
Thank you in advance.
Edit: thank you for the answers. It is indeed a problem of the binary representations of 0.01 etc. not being exact but carrying a small rounding error. The t scale was dictated by the other physical quantities in the program. Another answer/hint I got elsewhere in the meantime was to always work with a tolerance when comparing floating-point numbers. In this case "t <= 1.00" could become "t < 1.00 + 0.01/2", or more generally "t < 1.00 + h/2" when working with a variable step size h, as was needed in my application.
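For completeness, here is a minimal sketch of that tolerance-based loop; the half-step bound and the diagnostic printout are only illustrative, and the exact digits printed depend on the platform (assuming IEEE-754 doubles):

#include <iostream>
#include <iomanip>
using namespace std;

int main() {
    const double h = 0.01; // step size; a variable in the real program

    // Stop with a half-step tolerance instead of an exact <= comparison,
    // so the last point (t around 1.00) is not lost to rounding error.
    for (double t = 0.0; t < 1.00 + h / 2; t += h) {
        cout << t << " ";
    }
    cout << endl;

    // Diagnostic: adding 0.01 one hundred times does not give exactly 1.0,
    // which is why (t <= 1.00) and (t == 1.00) were false in the original code.
    double sum = 0.0;
    for (int i = 0; i < 100; ++i) sum += h;
    cout << setprecision(17) << "sum of 100 steps = " << sum
         << ", sum == 1.00? " << (sum == 1.00) << endl;
    return 0;
}

An alternative that avoids the accumulation altogether is to loop over an integer step count and compute t = i * h inside each iteration.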