I understand that a calculation such as 0.0375 + 0.0375 + 0.0375 results in 0.11249999999999999.
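For reference, that result can be reproduced with a throwaway snippet like the one below (my own sketch; 17 significant digits is enough to display a double's stored value for round-trip purposes):

#include <iostream>
#include <iomanip>

int main() {
    // 0.0375 has no exact binary representation, so each term is already
    // slightly off before the additions even start.
    double sum = 0.0375 + 0.0375 + 0.0375;
    // prints 0.11249999999999999 rather than 0.1125 with typical IEEE-754 doubles
    std::cout << std::setprecision(17) << sum << std::endl;
}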
If I were to repeatedly add 0.0375 to a double variable in a for loop (x times), would the result at each iteration gradually become more and more inaccurate due to floating-point precision issues?
I can tolerate an approximation such as the one shown above, but I can't have a result that continues to deviate from the true value as the number of iterations increases.
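To make the question concrete, here is a rough sketch (entirely my own; the step value, iteration count, and checkpoint interval are arbitrary choices) that compares the running sum against i * step at a few checkpoints, so any per-iteration drift can be watched directly:

#include <cmath>
#include <iomanip>
#include <iostream>

int main() {
    const double step = 0.0375;        // arbitrary step value
    const int iterations = 10000000;   // arbitrary iteration count
    double running = 0.0;

    std::cout << std::setprecision(17);
    for (int i = 1; i <= iterations; ++i) {
        running += step;               // rounds the running sum once per iteration
        if (i % 1000000 == 0) {
            // reference computed with a single multiplication (one rounding step)
            double reference = static_cast<double>(i) * step;
            std::cout << i << ": sum = " << running
                      << "  i*step = " << reference
                      << "  |diff| = " << std::fabs(running - reference) << std::endl;
        }
    }
}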
EDIT:
I've run the following test (with several values):
#include <iostream>

int main() {
    std::cout.precision(17);
    // Reference: the same product computed with a single multiplication
    double t = 0.42300 * 783698177;
    std::cout << std::fixed << t << std::endl;
    // Accumulate by adding 0.42300 repeatedly; each addition rounds the running sum
    double g = 0;
    for (int y = 0; y < 783698177; y++) {
        g = g + 0.42300;
    }
    std::cout << std::fixed << g << std::endl;
}
As I suspected, as the number of iterations increases, the result deviates further (and more consistently) from the actual value.
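For what it's worth, the standard worst-case bound for adding up n terms one at a time (assuming IEEE-754 doubles rounded to nearest, with unit roundoff $u \approx 1.1 \times 10^{-16}$) is roughly

$$\left|\hat{s}_n - \sum_{i=1}^{n} x_i\right| \le (n-1)\,u \sum_{i=1}^{n} |x_i| + O(u^2),$$

i.e. the worst-case error is allowed to grow linearly with the number of additions, although in practice individual rounding errors often partially cancel and the observed error is smaller.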
Has anybody else observed this?
Although I feel I understand why this happens, could somebody provide an intuitive (or basic mathematical) explanation for it?