I understand that a calculation such as 0.0375 + 0.0375 + 0.0375 results in 0.11249999999999999.
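
For reference, a minimal snippet that reproduces this (printing with 17 digits, as in the test further down) looks like the following:

#include <iostream>

int main() {
    std::cout.precision(17);

    // Three additions of the closest double to 0.0375; the sum is not
    // exactly 0.1125, so this prints 0.11249999999999999.
    double sum = 0.0375 + 0.0375 + 0.0375;
    std::cout << std::fixed << sum << std::endl;
}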

If I were to continually add '0.0375' to a double variable in a for loop (x times), would you find that the result at each iteration gradually becomes more and more inaccurate due to the issues with floating point precision?

I can tolerate an approximation such as the one shown above, but can't have a result which continues to deviate from the actual result as the number of iterations increases.

EDIT:

I've run the following test (with several values):

#include <iostream>

int main() {
    std::cout.precision(17);

    // Reference value: a single multiplication, so only one rounding step.
    double t = 0.42300 * 783698177;
    std::cout << std::fixed << t << std::endl;

    // Accumulated value: one rounding step per addition, 783698177 times.
    double g = 0;
    for (int y = 0; y < 783698177; y++) {
        g = g + 0.42300;
    }

    std::cout << std::fixed << g << std::endl;
}

As I suspected, the more iterations there are, the further the result deviates from the actual value.
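
To put a number on that drift rather than comparing the two printouts by eye, the same test can simply print the difference between the two values (just a sketch of the same experiment; I haven't reproduced its output here):

#include <iostream>

int main() {
    std::cout.precision(17);

    double t = 0.42300 * 783698177;   // single multiplication
    double g = 0;
    for (int y = 0; y < 783698177; y++) {
        g = g + 0.42300;              // repeated addition
    }

    // Absolute drift between the two ways of computing the "same" value.
    std::cout << std::fixed << (g - t) << std::endl;
}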

Has anybody else observed this?

Although I feel I understand why this is happening, could somebody provide an intuitive (or basic mathematical) explanation for why this occurs?

M-R
  • I don't think it's an issue with cout; it's an issue with how floats/doubles work within computers. – Frzn Flms Feb 07 '18 at 19:41
  • Any chance you can reopen the question please? – M-R Feb 07 '18 at 20:17
  • I'm pretty sure it's the same issue. The more operations the CPU does on the floating-point value, the more the small mistakes build up. The CPU made a mistake on the first calculation (by 0.00...01%), and as you do more operations the mistakes build up and it doesn't correct them (because the exact value is difficult to represent). In most cases, your application will perform as usual. Modern compilers have floating-point precision modes, so you can play with those if you really need to (example: https://learn.microsoft.com/en-us/cpp/build/reference/fp-specify-floating-point-behavior) – Frzn Flms Feb 07 '18 at 22:45
  • This is also the conclusion I came to. If you write up an answer, I'll be happy to accept it. :) – M-R Feb 07 '18 at 23:14
  • @FrznFlms, how would the CPU know how to correct the error, though, should it wish? Not that I need it; I have enough precision. Just wondering. – M-R Feb 07 '18 at 23:17
  • I don't think the CPU can fix the issue without jumping to 128 bits or your language's chosen big-floating-point library/structure (example: https://gmplib.org). With a plain double there's physically not enough information to store 0.1125, so your CPU takes the next closest thing (kinda). Sometimes your CPU can intentionally make larger errors, like with the fast floating-point precision level, where it intentionally sacrifices accuracy for speed. But if you have it set to precise, it might just be better to allocate more memory using libraries (see the sketch after these comments). – Frzn Flms Feb 08 '18 at 00:18
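
As a rough illustration of the "more bits" point in the comments above (only a sketch: long double's width varies by platform, e.g. 80-bit extended precision on x86 Linux but the same as double with MSVC), running the same accumulation with a wider accumulator shrinks the drift without removing it, since 0.42300 still has no exact binary representation:

#include <iostream>

int main() {
    std::cout.precision(17);

    // Same accumulation as the earlier test, but with a wider accumulator.
    // Each addition still rounds, only to a smaller unit, so the drift
    // shrinks; it cannot vanish because 0.42300 is not exactly representable.
    long double g = 0.0L;
    for (int y = 0; y < 783698177; y++) {
        g = g + 0.42300L;
    }

    std::cout << std::fixed << g << std::endl;
}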

0 Answers