
I understand that when I increase the precision I should get a more precise result. But that is not clear from this example, where I have increased the precision yet still don't get the expected result:

#include <iostream>
#include <iomanip>
int main()
{
    using namespace std;
    cout << setprecision(17);
    double dValue;
    dValue = 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1;
    cout << dValue << endl;
}

The output is 0.99999999999999989

Can someone explain why this is happening?

    The programming language you are using is C++. – Pascal Cuoq Dec 30 '14 at 12:06
  • Look into the IEEE 754 standard, http://en.wikipedia.org/wiki/Double-precision_floating-point_format. The computer doesn't see something like 0.1; it stores the nearest binary number (for a simple example, 2^-3 = 0.125 decimal is exactly representable). When you do calculations on floating-point numbers you get inexact results. – Nabuchodonozor Dec 30 '14 at 12:20
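
To see the comment above concretely, a minimal sketch: printing the literal 0.1 with more significant digits reveals the nearest double that is actually stored.

#include <iostream>
#include <iomanip>
int main()
{
    using namespace std;
    // The literal 0.1 is rounded to the nearest representable double,
    // which is slightly larger than one tenth.
    cout << setprecision(20) << 0.1 << endl;  // prints 0.10000000000000000555
}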

1 Answer


Increasing the precision does not produce a more precise result. It simply formats the output to the number of significant digits you want to show. The "error" you're seeing comes from the computer's inability to represent 0.1 exactly in binary floating point.
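
A minimal sketch of the practical consequence: the stored sum falls just short of 1.0, so an exact == comparison fails, and the usual fix is to compare against a small tolerance instead.

#include <iostream>
#include <iomanip>
#include <cmath>
int main()
{
    using namespace std;
    double sum = 0.0;
    for (int i = 0; i < 10; ++i)
        sum += 0.1;  // each addition rounds to the nearest double
    cout << boolalpha << setprecision(17);
    cout << sum << endl;                       // 0.99999999999999989
    cout << (sum == 1.0) << endl;              // false: exact comparison fails
    cout << (fabs(sum - 1.0) < 1e-9) << endl;  // true: tolerance-based comparison
}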

– Rubix Rechvin
  • Why this inability, when I can print 1000000-digit numbers on the screen? Why does no inability show up there? – user3891236 Dec 30 '14 at 12:12
  • @user3891236 Please read the links provided above, and http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html. Why do you think the character count on your screen has anything to do with how many bits a variable has? Or try some binary calculation: your decimal 0.1 is a binary number of *infinite* length... – deviantfan Dec 30 '14 at 12:14
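
One way to see the "infinite length" point from the comment above: C++11's hexfloat manipulator prints the bits that are actually stored, showing the repeating binary fraction of 0.1 truncated to fit the 52-bit mantissa, while a power of two such as 0.5 is exact.

#include <iostream>
int main()
{
    using namespace std;
    // hexfloat (C++11) shows the stored bits: the repeating binary
    // fraction of 0.1 is cut off after 52 mantissa bits.
    cout << hexfloat << 0.1 << endl;  // 0x1.999999999999ap-4
    cout << hexfloat << 0.5 << endl;  // 0x1p-1: exact, since 0.5 is a power of two
}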