
My code:

import math
import cmath
print "E^ln(-1)", cmath.exp(cmath.log(-1))

What it prints:

E^ln(-1) (-1+1.2246467991473532e-16j)

What it should print:

-1

(For reference, here's Google checking my calculation.)

According to the documentation at python.org, cmath.exp(x) returns e^x and cmath.log(x) returns ln(x), so unless I'm missing a semicolon or something, this is a pretty straightforward three-line program.

When I test cmath.log(-1) it returns πi (technically 3.141592653589793j), which is right. Euler's identity says e^(πi) = -1, yet when I raise e to πi, Python gives me some kind of crazy talk (specifically -1+1.2246467991473532e-16j).
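A quick sanity check (standard library only) confirms that the logarithm really comes back as float π, the closest double to the true π:

import math
import cmath

# log(-1) lands on the principal branch: 0 + iπ, with π rounded to a double
print(cmath.log(-1))                   # 3.141592653589793j
print(cmath.log(-1) == math.pi * 1j)   # True: the imaginary part is float pi exactly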

Why does Python hate me, and how do I appease it?

Is there a library to include to make it do math right, or a sacrifice I have to offer to van Rossum? Is this some kind of floating-point precision issue, perhaps?

The big problem I'm having is that the precision is off enough for other values to appear closer to zero than the actual zeros of the final function (not shown), so boolean tests (e.g. if x == 0) are worthless, and so are local minima, etc.

For example, in one iteration below:

X = 2  Y = (-2-1.4708141202500006e-15j)
X = 3  Y = -2.449293598294706e-15j
X = 4  Y = -2.204364238465236e-15j
X = 5  Y = -2.204364238465236e-15j
X = 6  Y = (-2-6.123233995736765e-16j)
X = 7  Y = -2.449293598294706e-15j

3 & 7 are both actually equal to zero, yet they appear to have the largest imaginary parts of the bunch, and 4 and 5 don't have their real parts at all.

Sorry for the tone. Very frustrated.

Jason Nichols
  • It looks like a floating point precision issue. -1.4E-16 is very small and the real part is correct. – brice Jun 12 '13 at 17:59
  • @brice real part is incorrect on final function, and imaginary parts are smaller on numbers where they should exist than on numbers where they should be zero. – Jason Nichols Jun 12 '13 at 18:05
  • Please read: [What every computer scientist should know about Floating Point Arithmetic](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.22.6768). All your questions will be answered in gory/glorious detail! – mforbes Jun 12 '13 at 18:09
  • on a fun side note, this simple code just blue screened my laptop and fried my file in Netbeans experimental python editor. I can feel the tension evaporating today. – Jason Nichols Jun 12 '13 at 18:36

3 Answers


As you've already demonstrated, cmath.log(-1) doesn't return exactly i*pi. Of course it can't: pi is an irrational number, so it has no exact floating-point representation.

Now you raise e to the power of something that isn't exactly i*pi and you expect to get exactly -1. However, if cmath returned that, it would be giving you an incorrect result: exp(i*pi + epsilon) shouldn't equal -1, and Euler makes no such claim!

For what it's worth, the result is very close to what you expect: the real part is exactly -1, and the imaginary part is on the order of double-precision round-off (about 2.2e-16).
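In fact the stray imaginary part can be pinned down: exp(i*x) = cos(x) + i*sin(x), and since float pi differs from the true π by about 1.22e-16, sin() of it returns that difference instead of zero. A quick check with the standard math and cmath modules:

import math
import cmath

print(cmath.exp(1j * math.pi))   # (-1+1.2246467991473532e-16j)
print(math.sin(math.pi))         # 1.2246467991473532e-16, the same residue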

mgilson
  • See the comment to brice above. The issue at hand is controlling for this inaccuracy. As noted the precision issue seems to be adding or subtracting enough random noise to make legitimate zeros of the function less zero-ey (i.e. greater absolute values of real and imaginary parts) than places where it has a legitimate value. – Jason Nichols Jun 12 '13 at 18:16
  • @JasonNichols -- In your `X = ... Y = ...` stuff ... I don't know what `X` and `Y` are supposed to be representing there. Maybe if we knew better what you're talking about we could help you -- but I doubt it. Ultimately, you're not going to do better in a computer unless you use some sort of symbolic manipulation library/tool (`maxima`, `Mathematica`, or something like `sympy`). Of course, sometimes you can do some algebraic manipulation on things to get it in a form that is less sensitive to precision errors. – mgilson Jun 12 '13 at 18:21
  • trying to create a trivial test case that doesn't reveal proprietary information. The actual equation is from a paper someone intends to publish, so leaking prematurely would be bad for the whole getting paid part of my job. Problem is I'm only savvy enough to represent the functions in python, not derive new ones that behave similarly and yield the same precision issues on the results. – Jason Nichols Jun 12 '13 at 18:34
  • @mgilson Regretfully, sympy doesn't figure this special case out either: ``sympy.simplify(e**(I*pi))`` is ``2.71828182845905**(I*pi)`` and ``N(e**(I*pi))`` equals ``-1.0 + 2.0e-16*I``. (But see the sketch below.) – nealmcb Nov 30 '14 at 01:43
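The comment above evaluates numerically because its `e` is the floating-point constant 2.71828...; with SymPy's symbolic constants the identity does simplify exactly. A minimal sketch, assuming a standard SymPy install:

from sympy import E, I, pi, exp

print(exp(I * pi))    # -1, evaluated symbolically
print(E**(I * pi))    # -1 as well; E is the symbolic base, not a float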

It appears to be a rounding issue. While -1+1.22460635382e-16j is not an exact -1, the imaginary part 1.22460635382e-16j is extremely close to zero. A quick and dirty fix is to round the result to a certain number of digits after the decimal point (14, maybe?).

Anything with magnitude less than about 10^-15 is normally zero in disguise: computer calculations carry inherent error that is often in that range. Floating-point representations are representations, not exact values.
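A minimal sketch of that rounding approach (the helper name and the 14-digit cutoff are illustrative choices):

import cmath

def round_complex(z, digits=14):
    # Round real and imaginary parts separately; 14 digits sits just
    # below the ~15-16 significant digits a double can carry.
    return complex(round(z.real, digits), round(z.imag, digits))

print(round_complex(cmath.exp(cmath.log(-1))))   # (-1+0j)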

wtf8_decode

The problem is inherent to representing irrational numbers (like π) in finite space as floating-point numbers.

The best you can do is filter your result and set it to zero if its value is within a given range.

>>> import cmath
>>> tolerance = 1e-15
>>> def clean_complex(c):
...   real,imag = c.real, c.imag
...   if -tolerance < real < tolerance:
...     real = 0
...   if -tolerance < imag < tolerance:
...     imag = 0
...   return complex(real,imag)
... 
>>> clean_complex( cmath.exp(cmath.log(-1)) )
(-1+0j)
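On Python 3.5 and later, the standard library also ships a tolerance-based comparison, which avoids hand-rolling the check when all you need is a boolean:

>>> cmath.isclose(cmath.exp(cmath.log(-1)), -1)
True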
brice
  • upvote for the tolerance function, but I'm still struggling with the precision giving false values within the tolerance. – Jason Nichols Jun 12 '13 at 18:32
  • It would take infinite space to properly represent π as a floating point. Since we truncate, our final value is off from the platonic ideal of π by the amount we truncated, scaled up by any operations in between that could increase the error. – brice Jun 12 '13 at 18:36
  • Wish I could upvote twice for "platonic ideal". :) I get the concept, but the specific problem is that the error from truncation is not proportional enough to the truncation: for values where the answer is far from zero the error is acceptable, but for certain non-trivial values the error is greater for true zeros than for false positives. – Jason Nichols Jun 12 '13 at 18:39