12

In JavaScript, `Math.cbrt(1728)` evaluates to the exact result of 12.

However, the seemingly equivalent expression `Math.pow(1728, 1/3)` evaluates to 11.999999999999998.

Why do these results differ in precision?
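
For reference, a minimal snippet that reproduces the observation in a JavaScript console (the printed values are the ones quoted above):

```js
console.log(Math.cbrt(1728));       // 12
console.log(Math.pow(1728, 1/3));   // 11.999999999999998
```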

  • Cool question - I didn't realize that `Math.cbrt()` performed like that. – Jeremy Nov 15 '14 at 22:29
  • This question may be a duplicate, but it does not seem to be a duplicate of the linked questions. 1.0/3.0 is not exactly representable as a floating-point number, meaning the input to `pow()` has error. Exponentiation has the well-known property of magnifying error in the input (that is, the error in the output is larger than the error in the input). Assuming both `cbrt()` and `pow()` are implemented with similar maximum error (typically the case for high-quality math libraries), the computation `pow(x,1.0/3.0)` is thus less accurate than `cbrt(x)`. The latter will typically also be faster. – njuffa Nov 15 '14 at 23:30
  • In fact, due to the issue I pointed out in my previous comments, a double-precision math library that correctly rounds both `cbrt()` and `pow()` will return exactly the results stated in the question: `cbrt(1728) = 1.2000000000000000e+001 0x1.8000000000000p+3; pow(1728,1.0/3.0) = 1.1999999999999998e+001 0x1.7ffffffffffffp+3` – njuffa Nov 16 '14 at 00:52
  • I get `0x1.7ffffffffffff1172efp+3` for `pow(1728, 1.0/3.0)`, which rounds as njuffa described. Neato. – tmyklebu Nov 16 '14 at 03:16
  • @njuffa It seems that `Math.cbrt` is implemented similarly to `Math.pow(x,1/3)` according to this source: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/cbrt#Polyfill. Maybe this differs between browsers? – LyingOnTheSky Nov 16 '14 at 12:39
  • `1/3` is an irrational number; hence the output of *any* calculation with this number will be irrational... I don't see how you can expect anything else? – Martin Tournoij Nov 16 '14 at 12:46
  • @Carpetsmoker `an irrational number is any real number that cannot be expressed as a ratio of integers`; neither of these numbers is irrational. – LyingOnTheSky Nov 16 '14 at 12:50
  • @LyingOnTheSky Right; I stand corrected. I thought that any infinite number was *also* irrational; in any case, it is infinite. – Martin Tournoij Nov 16 '14 at 12:53
  • @Carpetsmoker This is a fraction; an infinite number would be a hyperreal number (which can't really be represented like this). – LyingOnTheSky Nov 16 '14 at 12:57
  • @LyingOnTheSky Yes, but that's not how JS sees things. `1/3` evaluates to `0.33333...`; AFAIK there is no way to represent a fraction like this in JS without using external libraries. – Martin Tournoij Nov 16 '14 at 13:00
  • @Carpetsmoker Yes but according to this [link](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/cbrt#Polyfill), `Math.cbrt` is similarly implemented to `Math.pow(x,1/3)`. – LyingOnTheSky Nov 16 '14 at 13:01
  • @LyingOnTheSky: that link only refers to how to add something similar to `Math.cbrt` if it is not available from the implementation. The native version uses a different function which should be more accurate than simply doing `Math.pow(x, 1/3)`. – Qantas 94 Heavy Nov 16 '14 at 13:36
  • @Qantas94Heavy You are right; I am not a web programmer and didn't know what a polyfill is. – LyingOnTheSky Nov 16 '14 at 18:14
  • Related question: http://stackoverflow.com/questions/18063755/computing-a-correctly-rounded-an-almost-correctly-rounded-floating-point-cubic – Pascal Cuoq Nov 17 '14 at 00:15

1 Answer

19

A couple of general remarks up front:

  1. As explained in this seminal paper, due to finite precision and range limits, floating-point arithmetic is sufficiently different from real mathematics (for example, lack of associativity, as illustrated in the snippet after this list) that mathematically equivalent expressions are not necessarily equivalent when evaluated in floating-point arithmetic.

  2. Standards for computer languages do not typically guarantee any particular accuracy for math functions, or identical error bounds between different math functions such as cbrt() or pow(). But math libraries that deliver correctly rounded results for a given precision do exist, such as CRlibm.
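
To make point 1 concrete, here is a small JavaScript illustration (not part of the original answer) of the lack of associativity in double-precision addition:

```js
// Mathematically (a + b) + c equals a + (b + c), but not in floating point:
console.log((0.1 + 0.2) + 0.3);  // 0.6000000000000001
console.log(0.1 + (0.2 + 0.3));  // 0.6
```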

In this case however, cbrt(x) will deliver more accurate results than pow(x,1.0/3.0) even when both functions are correctly rounded for all inputs.

The issue is that 1.0/3.0 cannot be represented exactly as a floating-point number, whether in binary or decimal. The IEEE-754 double precision number closest to one third is 3.3333333333333331e-1 (or 0x1.5555555555555p-2 when expressed in the C/C++ hexadecimal floating-point format). The relative representational error is -5.5511151231257827e-17 (-0x1.0000000000000p-54), meaning the best double-precision representation of 1/3 is somewhat smaller than the desired mathematical value.
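
This representational error can be inspected directly in JavaScript (a quick check, printing the stored value of `1/3` to full precision):

```js
// The double nearest to one third, shown to 17 significant digits:
console.log((1 / 3).toPrecision(17));  // 0.33333333333333331
// The stored value is slightly below the true 0.3333..., so Math.pow(x, 1/3)
// actually raises x to an exponent that is a little smaller than one third.
```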

This initial error in one of the inputs of pow() is not only passed through to the output, it is magnified due to the error magnification property of exponentiation. As a result, pow(x,1.0/3.0) will generally deliver results that are too small compared to the desired cube root, even if pow() delivers correctly rounded results. For the example in the question, the correctly rounded results are

cbrt(1728.0)        = 1.2000000000000000e+1  (0x1.8000000000000p+3)
pow(1728.0,1.0/3.0) = 1.1999999999999998e+1  (0x1.7ffffffffffffp+3)

that is, the result from pow() is one ulp smaller than the result from cbrt(). For arguments large in magnitude, the difference will be much larger. For example, if x is 2^1022, the respective results differ by 94 ulps:

x              = 4.4942328371557898e+307  (0x1.0000000000000p+1022)
cbrt(x)        = 3.5553731598732904e+102  (0x1.965fea53d6e3dp+340)
pow(x,1.0/3.0) = 3.5553731598732436e+102  (0x1.965fea53d6ddfp+340)

The relative error in the result of pow() in this example is 1.3108e-14, demonstrating the magnification of the relative error mentioned above.
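
A sketch for reproducing this large-argument example in a browser or Node.js console follows; the exact digits printed depend on how accurately the engine implements Math.cbrt and Math.pow, and the values in the comments are the correctly rounded results quoted above:

```js
const x = 2 ** 1022;                 // 4.4942328371557898e+307
const viaCbrt = Math.cbrt(x);        // correctly rounded: 3.5553731598732904e+102
const viaPow  = Math.pow(x, 1 / 3);  // correctly rounded: 3.5553731598732436e+102
console.log(viaCbrt, viaPow);
console.log((viaCbrt - viaPow) / viaCbrt);  // relative error, roughly 1.3e-14
```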

For reasons of both accuracy and performance, math libraries that implement cbrt() therefore typically do not map cbrt(x) to pow(x,1.0/3.0) but use alternative computational schemes. While implementations will differ, a commonly used approach is to start with an initial low-precision approximation followed by one or several steps of Halley's method, which has cubic convergence.
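
As an illustration of that scheme, here is a minimal JavaScript sketch (not the code of any particular math library; real implementations derive the initial estimate from the bit pattern of the argument and handle zero, negative, infinite, and NaN inputs, none of which this sketch does):

```js
// Cube root via Halley's method applied to f(y) = y^3 - x, for positive finite x.
function cbrtSketch(x) {
  // Low-precision starting estimate; libraries typically obtain this much more
  // cheaply from the exponent bits rather than by calling pow().
  let y = Math.pow(x, 1 / 3);
  for (let i = 0; i < 2; i++) {
    // Halley step: y <- y * (y^3 + 2x) / (2*y^3 + x), cubic convergence.
    const y3 = y * y * y;
    y = y * (y3 + 2 * x) / (2 * y3 + x);
  }
  return y;
}

console.log(cbrtSketch(1728));  // ≈ 12, typically matching Math.cbrt(1728)
```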

As a rule of thumb, when a computer language offers both a dedicated cube root functionality and general exponentiation functionality, the former should be preferred to the latter for the computation of cube roots.

njuffa
  • How did you write this within 5 minutes?! Do you have one of those brain-to-keyboard interfaces? – lxg Nov 16 '14 at 14:48
  • @lxg: I didn't. I have been applying minor corrections for the past five minutes. But it took me 20+ minutes to write. – njuffa Nov 16 '14 at 14:51
  • Ah, my fault … I misinterpreted the "answered xx minutes ago". Need more coffee. Anyway, ~20 minutes is still impressive. ;) – lxg Nov 16 '14 at 14:53
  • @lxg: My professional background includes the design of microprocessor floating-point units and mathematical libraries, plus I already knew what I wanted to write when I requested to have the question re-opened. – njuffa Nov 16 '14 at 14:56