
I have learned from Wikipedia that a double has at most 15-17 significant decimal digits.

However, for the simple C++ program below

double x = std::pow(10,-16);
std::cout<<"x="<<std::setprecision(100000)<<x<<std::endl;

(to test it, use this online shell), I get

x=9.999999999999999790977867240346035618411149408467364363417573258630000054836273193359375e-17

which has 88 significant decimal digits, which apparently contradicts the aforementioned claim from Wikipedia. Can someone clarify what I am misunderstanding? Thanks.

zell
  • Most of those digits are *not* significant; most of them don't help distinguish this value from the next biggest or smallest possible `double` value. – Oliver Charlesworth Jan 03 '15 at 19:12
  • 1
    @David: I voted to reopen (not realising that I have the casting vote due to my gold badge...); I believe that this isn't [the standard "why is FP broken?"-type question](http://stackoverflow.com/questions/588004/is-floating-point-math-broken); it's asking about the apparent discrepancy between standard precision claims and the behaviour of `setprecision`. – Oliver Charlesworth Jan 03 '15 at 19:14
  • 1
    @OliverCharlesworth I believe it's the same old question which is, as always, about representability – David Heffernan Jan 03 '15 at 19:18

1 Answer


There is no contradiction. As you can see, the value of x is incorrect at the first 7 in its decimal expansion; I count 16 correct digits before that. std::setprecision doesn't control the precision of the inputs to std::cout, it simply displays as many digits as you request. Perhaps std::setprecision is poorly named, and should be replaced by std::displayprecision, but std::setprecision is doing its job. From a linguistic perspective, think of std::setprecision as setting the precision of std::cout, and not attempting to control the precision of the arguments to std::cout.

user14717