17

A single/double/extended-precision floating-point representation of Pi is accurate up to how many decimal places?

phuclv

10 Answers

26
#include <stdio.h>

#define E_PI 3.1415926535897932384626433832795028841971693993751058209749445923078164062

int main(int argc, char** argv)
{
    long double pild = E_PI;
    double pid = pild;
    float pif = pid;
    printf("%s\n%1.80f\n%1.80f\n%1.80Lf\n",
    "3.14159265358979323846264338327950288419716939937510582097494459230781640628620899",
    pif, pid, pild);
    return 0;
}

Results:

[quassnoi #] gcc --version
gcc (GCC) 4.3.2 20081105 (Red Hat 4.3.2-7)

[quassnoi #] ./test

3.14159265358979323846264338327950288419716939937510582097494459230781640628620899

3.14159274101257324218750000000000000000000000000000000000000000000000000000000000
        ^
3.14159265358979311599796346854418516159057617187500000000000000000000000000000000
                 ^
3.14159265358979311599796346854418516159057617187500000000000000000000000000000000
                 ^
  0000000001111111
  1234567890123456
Quassnoi
  • interesting test... unfortunately, I bet it's all sorts of system dependent :P – rmeador Feb 03 '09 at 16:41
  • Actually I say dependent on the math.h library. – Jim C Feb 03 '09 at 16:43
  • Sure, that's why I put gcc --version there – Quassnoi Feb 03 '09 at 16:43
  • I used math.h only for the M_PI constant; I think it should be the same in every version, it's PI, after all :) Anyway, I updated the code not to use math.h – Quassnoi Feb 03 '09 at 16:44
  • This test is invalid for the extended-precision result, because your #define literal for pi is in double precision. You need it to be an extended-precision literal. See [this](http://stackoverflow.com/questions/21557816/whats-the-c-suffix-for-long-double-literals). – Madcowswe Feb 26 '15 at 11:10
  • the `E_PI` must have an `L` suffix to get long double precision; otherwise it'll be stuck at double precision – phuclv Mar 25 '17 at 01:29
18

When I examined Quassnoi's answer, it seemed suspicious to me that long double and double would end up with the same accuracy, so I dug in a little. Running his code compiled with clang gave me the same results as his. However, I found that if I specified the long double suffix and used a literal to initialize the long double, it provided more precision. Here is my version of his code:

#include <stdio.h>

int main(int argc, char** argv)
{
    long double pild = 3.14159265358979323846264338327950288419716939937510582097494459230781640628620899L;
    double pid = pild;
    float pif = pid;
    printf("%s\n%1.80f\n%1.80f\n%1.80Lf\n",
        "3.14159265358979323846264338327950288419716939937510582097494459230781640628620899",
        pif, pid, pild);
    return 0;
}

And the results:

3.14159265358979323846264338327950288419716939937510582097494459230781640628620899

3.14159274101257324218750000000000000000000000000000000000000000000000000000000000
        ^
3.14159265358979311599796346854418516159057617187500000000000000000000000000000000
                 ^
3.14159265358979323851280895940618620443274267017841339111328125000000000000000000
                    ^
thephred
  • This appears to be compiler- and architecture-dependent, however: http://en.wikipedia.org/wiki/Long_double – thephred Mar 28 '14 at 18:50
4

6 places for float and 14 places for double. One place is taken by the leading 3 before the decimal point, and the last place, although stored, can't be counted as a full precision point.

And sorry, but I don't know what extended means without more context. Do you mean C#'s decimal?

Robert Gould
  • Please see "An Informal Description of IEEE754" http://www.cse.ttu.edu.tw/~jmchen/NM/refs/story754.pdf –  Feb 03 '09 at 17:52
  • @Hrushikesh The link is dead :( But I have found a [working link](http://140.129.20.249/~jmchen/NM/refs/story754.pdf). – fredoverflow Jun 06 '12 at 13:40
1

Accuracy of a floating-point type is not related to PI or to any specific number. It depends only on how many significant digits the type stores in memory.

In the case of IEEE-754, float uses 23 bits for the mantissa, so it can be accurate to 23+1 = 24 bits of precision, or ~7 decimal digits. Regardless of whether it's π, e, 1.1, or 9.87e9... all of them are stored with exactly 24 significant bits in a float. Similarly, double (53 bits of mantissa) can store 15~17 significant decimal digits.

phuclv
  • Your logic / conclusion is actually incorrect. It **is related** to the specific value; the binary representation of a floating-point number has a fixed number of bits for the mantissa, but depending on the exponent, some of those bits are used to represent the integer portion or the decimals portion. An example that helps visualize this: you store pi in a `double` and it will be accurate up to the 15th decimal (at least for the gcc that comes with Ubuntu 18, running on an Intel Core i5; I believe it's mapped to IEEE-754). You store 1000*pi, and it will be accurate up to the 12th decimal. – Cal-linux Apr 20 '19 at 15:40
  • @Cal-linux you're mistaking the precision of a type for the **error after doing operations**. If you do `1000*pi` and get a slightly less accurate result, that doesn't mean the precision was reduced. You got it wrong because you don't understand what "significand" is, which isn't counted after the radix point. In fact 1000*pi loses only 1 digit of precision and is still [correct to the 15th digit of the significand, **not 12**](https://ideone.com/SxXZZc). You're also confusing ['precision' and 'accuracy'?](https://stackoverflow.com/q/8270789/995714) – phuclv Apr 20 '19 at 16:02
  • and if you have the exact 1000pi constant instead of doing it through the multiplication during runtime you'll still get exactly 53 bits of precision – phuclv Apr 20 '19 at 16:03
  • you're still getting it wrong. It is a well-known aspect of floating point that the accuracy/error in the representation is unevenly distributed across the range; you can distinguish between 0.1 and 0.1000001, but not between 10^50 and (0.0000001 + 10^50). FP stores a value as _x_ times 2^_y_, where _x_ uses a given number of bits to represent a value between 1 and 2 (or was it between 0 and 1?? I forget now), and _y_ has a range given by the number of bits assigned to it. If _y_ is large, the accuracy of _x_ is mostly consumed by the integer part. – Cal-linux Apr 20 '19 at 23:21
  • As for the exact 1000pi as a constant --- you may get the same 53 bits of precision, but that's not what the thread is about: you get the same 16 correct decimal digits at the beginning; but now three out of those 16 are used for the integer part, 3141 --- the decimal places are correct up to the 89793, exactly as with pi; except that in pi, that 3 in 89793 is the 15th decimal, whereas in 1000pi, it is the 12th decimal! – Cal-linux Apr 20 '19 at 23:24
  • @Cal-linux I'm well aware that the error and the distance between consecutive values scale according to the exponent, but it's irrelevant here. And the OP didn't ask about the decimal numbers after 1000pi – phuclv Apr 21 '19 at 00:32
  • _"And the OP didn't ask about the decimal numbers after 1000pi"_: no, but it is directly relevant; the OP asked how many decimal places of pi are correctly represented by an FP. You argued that the actual value has no relevance, which is incorrect: for larger values, you get a smaller number of decimal places that are correctly represented. 1000pi is just an example to illustrate this; I'm still focusing, as the OP requested, on the number of _decimal places_, which is what your argument gets wrong. – Cal-linux Apr 21 '19 at 18:09
  • For the fraction part of a floating-point number it is mostly incorrect to use the term decimal digits. It is correct sometimes, such as for 0.25, which is exactly representable in base 10 (as we are all familiar with) and in base 2 (2^-2). 0.1 is exact in base 10, but (because it can't be exactly represented) it will be an approximation in base 2, i.e. in the fraction part of an IEEE-754 floating-point number. 1/3 is an example of a number that cannot be exactly represented in either base. – Olof Forshell May 18 '19 at 18:36
1

Print and count, baby, print and count. (Or read the specs.)

Bombe
1

In the x86 floating-point unit (the x87) there are instructions for loading certain floating-point constants. "fldz" and "fld1" load 0.0 and 1.0 onto the stack top "st" (aka "st(0)"), for example. Another is "fldpi".

All these values have a mantissa that's 64 bits long, which translates into close to 20 decimal digits. The 64 bits are possible thanks to the 80-bit "tempreal" floating-point format used internally in the x87. The x87 can also load tempreals from and store them to 10-byte memory locations.

Olof Forshell
0

*EDIT: see this post for an up-to-date discussion: Implementation of sinpi() and cospi() using standard C math library*

The new math.h functions __sinpi() and __cospi() fixed the problem for me for right angles like 90 degrees and such.

cos(M_PI * -90.0 / 180.0) returns 0.00000000000000006123233995736766
__cospi( -90.0 / 180.0 )      returns 0.0, as it should

/*  __sinpi(x) returns the sine of pi times x; __cospi(x) and __tanpi(x) return
the cosine and tangent, respectively.  These functions can produce a more
accurate answer than expressions of the form sin(M_PI * x) because they
avoid any loss of precision that results from rounding the result of the
multiplication M_PI * x.  They may also be significantly more efficient in
some cases because the argument reduction for these functions is easier
to compute.  Consult the man pages for edge case details.                 */
extern float __cospif(float) __OSX_AVAILABLE_STARTING(__MAC_10_9, __IPHONE_NA);
extern double __cospi(double) __OSX_AVAILABLE_STARTING(__MAC_10_9, __IPHONE_NA);
extern float __sinpif(float) __OSX_AVAILABLE_STARTING(__MAC_10_9, __IPHONE_NA);
extern double __sinpi(double) __OSX_AVAILABLE_STARTING(__MAC_10_9, __IPHONE_NA);
extern float __tanpif(float) __OSX_AVAILABLE_STARTING(__MAC_10_9, __IPHONE_NA);
extern double __tanpi(double) __OSX_AVAILABLE_STARTING(__MAC_10_9, __IPHONE_NA);
Keith Knauber
  • `__sinpi()` and `__cospi()` are definitely not standard functions. It's easy to see, as they have the `__` prefix. Searching for them mostly returns results for macOS and iOS. This question says they were added by Apple: [Implementation of sinpi() and cospi() using standard C math library](https://stackoverflow.com/q/42792939/995714), and the [man page](https://www.unix.com/man-page/osx/3/__sinpi/) also says it's in OSX – phuclv Apr 18 '19 at 16:35
0

Since there are sieve equations for binary representations of pi, one could combine variables to store pieces of the value and increase precision. The only limit on the precision of this method is the conversion from binary to decimal, but even rational numbers can run into issues with that.

0

World of PI has PI to 100,000,000,000 digits, so you could just print and compare. For a slightly easier-to-read version, Joy of PI has 10,000 digits. And if you want to remember the digits yourself you could try learning the Cadaeic Cadenza poem.

Martin Brown
0

For C code, look at the definitions in <float.h>. That covers float (FLT_*), double (DBL_*) and long double (LDBL_*) definitions.

Jonathan Leffler