
When I run the following code, the output is exactly the decimal expansion of 2^500.

(g++ 5.3.1 on Ubuntu)

```cpp
#include <iostream>
#include <cmath>

using namespace std;

int main() {
    cout.precision(0);
    cout << fixed << pow(2.0, 500.0);
    return 0;
}
```

I wonder how C++ converts this floating-point number to its decimal string at such high precision.

I know that 2^500 can be represented exactly in IEEE 754 format. But I think taking mod 10 and dividing by 10 would cause precision loss on floating-point numbers. What algorithm is used when the conversion proceeds?

zzh1996
  • Carefully, I suggest you look at the open source implementations. This blog may also give you some idea of the sort of problems: http://www.exploringbinary.com/ – Richard Critten Jul 18 '16 at 12:09
  • Possible duplicate of [Algorithm to convert an IEEE 754 double to a string?](https://stackoverflow.com/questions/7153979/algorithm-to-convert-an-ieee-754-double-to-a-string) – phuclv May 25 '17 at 15:51
  • https://code.woboq.org/userspace/glibc/stdio-common/printf_fp.c.html#418 – zzh1996 Aug 27 '21 at 06:12

1 Answer


Yes, there exists an exact double-precision floating-point representation for 2^500. You should not assume that pow(2.0,500.0) produces this value, though. There is no guarantee of accuracy for the function pow, and you may find SO questions that arose from pow(10.0, 2.0) not producing 100.0, although that mathematical result is perfectly representable too.

But anyway, to answer your question: the conversion from the binary floating-point representation to decimal does not, in general, rely on floating-point operations, which would indeed be too inaccurate for the intended accuracy of the end result. In general, accurate conversion requires big-integer arithmetic. In the case of 2^500, for instance, the naïve algorithm would be to repeatedly divide the big integer written in binary as 1000…<500 zeroes in total>… by ten, collecting each remainder as the next decimal digit.

There are some cases where floating-point arithmetic can be used, for instance by taking advantage of the fact that powers of 10 up to 10^22 are represented exactly in IEEE 754 double precision. But correctly rounded conversion between binary floating-point and decimal always requires big-integer arithmetic in general, and this is particularly visible far away from 1.0.

Pascal Cuoq
  • Or BigDecimal (which often use BigInteger internally). – Rudy Velthuis Jul 18 '16 at 12:27
  • I assume that pow() internally uses exponentiation based on `e` and natural logarithms, so that this is accurate could well be pure coincidence, indeed. Or perhaps some pow() implementations are different for integer arguments? – Rudy Velthuis Jul 18 '16 at 12:29
  • @RudyVelthuis Regardless of how it is implemented, it only needs to be designed to be “faithful” (error < 1ULP) to guarantee that `pow(2.0,500.0)` will come out as 0x1.0p500, since this is the only double-precision number at a distance of 1 ULP of 2^500. – Pascal Cuoq Jul 18 '16 at 12:35
  • I'd guess that there are quite a few compiler runtimes where this doesn't apply, i.e. where you'll see differences of more than 1 ulp. – Rudy Velthuis Jul 18 '16 at 15:12
  • @RudyVelthuis yes that was the painful lesson of http://stackoverflow.com/questions/18417788/pow-cast-to-integer-unexpected-result – Pascal Cuoq Jul 18 '16 at 15:13
  • Ok, that was a very extreme example. – Rudy Velthuis Jul 18 '16 at 15:14