
Python computes the imaginary unit i = sqrt(-1) inaccurately:

>>> (-1) ** 0.5
(6.123233995736766e-17+1j)

This should be exactly 1j (Python calls it j instead of i). Both -1 and 0.5 are represented exactly, and the result can be represented exactly as well, so there's no hard reason (i.e., no floating-point limitation) why Python couldn't get it right. It could. And since i = sqrt(-1) is the very definition of the imaginary unit, it's rather disappointing that Python gets it wrong. So why does it? How does it compute the inaccurate result?
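
For comparison, cmath.sqrt returns exactly 1j here (CPython 3.x), so an exact result is clearly achievable:

>>> import cmath
>>> cmath.sqrt(-1)
1j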

Kelly Bundy
  • Generally speaking, computing square roots via generic exponentiation is a Bad Idea™ from a numerical perspective, regardless of whether the operands are real or complex. The reason is that generic exponentiation uses a more complex algorithm than square rooting, which typically results in larger error bounds. The magnitude of the real component here is on the order of double-precision rounding error. – njuffa Jul 16 '22 at 00:08
  • I conjecture `(-1) ** 0.5` is computed largely as `exp(log(-1) * .5)` and log(−1) is πi, and π is not representable in the floating-point format, so it is rounded to 3.141592653589793115997963468544185161590576171875, and then `exp(3.141592653589793115997963468544185161590576171875/2*i)` is computed as 6.12323399573676603586882014729198302312846062338790031898128063403419218957424163818359375•10^−17 + i. – Eric Postpischil Jul 16 '22 at 01:51
  • @EricPostpischil Nice use of exact values. – chux - Reinstate Monica Jul 16 '22 at 04:30
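
The conjecture in the comments is easy to check with cmath (output from CPython; the internal code path differs, as discussed under the answer below, but the arithmetic agrees):

>>> import cmath
>>> cmath.log(-1)
3.141592653589793j
>>> cmath.exp(cmath.log(-1) * 0.5)
(6.123233995736766e-17+1j)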

1 Answer


When complex arithmetic is required, your Python implementation likely calculates x^y as e^(y ln x), as might be done with the complex C functions cexp and clog. Those are in turn likely calculated with real functions including ln, sqrt, atan2, sin, cos, and pow, but the details need not concern us.
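
As a rough illustration, here is a Python sketch of that route (for a real exponent y, e^(y ln x) reduces to the polar form below; this is not CPython's actual C code, although, per the comments underneath this answer, complexobject.c ends up doing essentially the same arithmetic):

import math

def complex_pow(a, b):
    # Polar route: a**b = exp(b * ln a), with a complex and b real.
    r = math.hypot(a.real, a.imag)      # |a|
    phi = math.atan2(a.imag, a.real)    # arg(a); for a = -1 this is the double nearest pi
    mag = math.pow(r, b)                # |a|**b
    ang = phi * b                       # b * arg(a)
    return complex(mag * math.cos(ang), mag * math.sin(ang))

print(complex_pow(complex(-1, 0), 0.5))   # (6.123233995736766e-17+1j)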

ln −1 is πi. However, π is not representable in a floating-point format. Your Python implementation likely uses the IEEE-754 “double precision” format, also called binary64. In that format, the closest representable value to π is 3.141592653589793115997963468544185161590576171875. So ln −1 is likely calculated as 3.141592653589793115997963468544185161590576171875 i.

Then y ln x = .5 • 3.141592653589793115997963468544185161590576171875 i is 1.5707963267948965579989817342720925807952880859375 i.
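
Both of these decimal expansions can be read back out of Python itself, since converting a float to Decimal yields its exact value (assuming the usual binary64 doubles):

>>> import math
>>> from decimal import Decimal
>>> Decimal(math.pi)                 # the double nearest to pi
Decimal('3.141592653589793115997963468544185161590576171875')
>>> Decimal(math.pi * 0.5)           # halving a double is exact
Decimal('1.5707963267948965579989817342720925807952880859375')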

e^(1.5707963267948965579989817342720925807952880859375 i) is also not exactly representable. The true value of that is approximately 6.123233995736765886130329661375001464640•10^−17 + .9999999999999999999999999999999981253003 i.
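
Those digits can be reproduced with an arbitrary-precision package; a small sketch assuming mpmath is installed (trailing digits of the printed output may vary slightly):

from mpmath import mp, mpf, cos, sin   # third-party: pip install mpmath

mp.dps = 40
theta = mpf(1.5707963267948965579989817342720925807952880859375)  # the exact double from above
print(cos(theta))   # approx. 6.123233995736765886130329661375001464640e-17
print(sin(theta))   # approx. 0.9999999999999999999999999999999981253003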

The nearest representable value to 6.123233995736765886130329661375001464640•10^−17 is 6.12323399573676603586882014729198302312846062338790031898128063403419218957424163818359375•10^−17, and the nearest representable value to .9999999999999999999999999999999981253003 is 1, so the calculated result is 6.12323399573676603586882014729198302312846062338790031898128063403419218957424163818359375•10^−17 + 1 i.
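
And that is exactly what Python's result contains, which can be confirmed by converting its components to Decimal:

>>> from decimal import Decimal
>>> z = (-1) ** 0.5
>>> Decimal(z.real)
Decimal('6.12323399573676603586882014729198302312846062338790031898128063403419218957424163818359375E-17')
>>> z.imag
1.0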

Eric Postpischil
  • I tracked down the code now. I believe it goes through [float_pow](https://github.com/python/cpython/blob/74761548862eb5a324c23d86a6233d884f386f2e/Objects/floatobject.c#L737) (which does quite a few case distinctions but not exactly (-1)^0.5) and ends up computing the real part as [pow(hypot(-1, 0), 0.5) * cos(atan2(0, -1) * 0.5)](https://github.com/python/cpython/blob/74761548862eb5a324c23d86a6233d884f386f2e/Objects/complexobject.c#L144) (see [Python demo](https://tio.run/##HYpBCoAgEADvvmKPa2hoEfQdCUIPuosthK@39DCXmeEmkcp@cu39rpQhB4mQMlMViI1JDAQJZTNw0aMU11QEmV6cEa034PTPemhYxoNzR2fA@qFG0b1/)). – Kelly Bundy Jul 20 '22 at 14:48
  • So it *could* add another case distinction to recognize this noteworthy case, but I guess it's just not worth it. Btw, how did you get those approximate true values? WolframAlpha or so? – Kelly Bundy Jul 20 '22 at 15:06
  • @KellyBundy: I used Maple 10 for the values of e^(1.57…375 i). I use Apple's tools on macOS to compute floating-point values where they are easily computable inside the floating-point arithmetic; Apple has done a good job of ensuring that the `printf` formatting converts numbers correctly. Where more precision is needed, I usually use an old version of Maple 10. – Eric Postpischil Jul 20 '22 at 23:53
  • Hmm, is it noteworthy that that `printf` works correctly? I *expect* it to. Is some other implementation *not* working correctly? Can you tell an example? – Kelly Bundy Jul 21 '22 at 00:11
  • @KellyBundy: The C standard only requires it to “work” up to a certain number of digits. From memory, the conversion has to produce a number that would let you figure out what the exact number is if you know the numbers in the floating-point format, but it does not have to be rounded the same as if you took that exact number and rounded it to the number of digits requested. For example, if you use `%.25g` to request 25 digits, some C implementations might produce 17 digits (enough to uniquely distinguish the number) and fill out the rest with zeros. – Eric Postpischil Jul 21 '22 at 00:18
  • Ah, ok. I rarely write C. In Python, when I want to see the exact value, I usually do something like `'%.60f' % 0.1`, and that seems to work fine. – Kelly Bundy Jul 21 '22 at 00:30
  • @KellyBundy: Python does not have a firm specification and leaves floating-point details up to individual implementations. They have been getting better recently, and your implementation may do well, but be aware it is not something you can rely on when using other Python implementations. – Eric Postpischil Jul 21 '22 at 00:59
  • Hmm, in the doc about `%`-formatting, I don't see any kind of warning, so I'd consider it a bug if it didn't show the correct value. – Kelly Bundy Jul 21 '22 at 01:12
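
For what it's worth, under CPython specifically that approach does print the exact value (padded with zeros once the digits run out); Decimal gives the same digits without relying on any formatting guarantees:

>>> from decimal import Decimal
>>> '%.60f' % 0.1
'0.100000000000000005551115123125782702118158340454101562500000'
>>> Decimal(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')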