
The following

from scipy.special import gamma

gamma(x)

overflows for large x. This is why scipy provides gammaln, which is equivalent to np.log(gamma(x)) and allows us to work in log space and avoid overflow.
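For example (a quick illustration; the exact overflow point for float64 is around x = 172):

import numpy as np
from scipy.special import gamma, gammaln

x = 200.0
print(gamma(x))          # inf: the direct value overflows float64
print(np.log(gamma(x)))  # still inf, because the overflow already happened inside gamma
print(gammaln(x))        # roughly 857.93, computed entirely in log space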

Is there anything like this for scipy's exp1 function? I'd like to have something that returns the same as below but without underflowing for large x:

import numpy as np
from scipy.special import exp1

def exp1ln(x):
    return np.log(exp1(x))
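For example, this is roughly what goes wrong for large x (the exact cutoff depends on the platform, but with standard float64 exp1 underflows to 0 well before x = 800):

print(exp1(10.0))     # about 4.157e-06, still representable
print(exp1(800.0))    # 0.0, since E1(800) ~ exp(-800)/800 is far below the smallest double
print(exp1ln(800.0))  # -inf from np.log(0.0), typically with a runtime warning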

(My reasoning for thinking this would be similar to gammaln is that exp1 is in the same family of functions; see here: Incomplete Gamma function in scipy.)

Thanks

samiller

2 Answers


For real x you can use the asymptotic series expansion of log(E1(x)) for x → ∞ (see here), which is highly accurate for large x.
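Writing the expansion out (it is an asymptotic, divergent series, so it gets truncated after a fixed number of terms): for large real x,

E1(x) ≈ (exp(-x) / x) * (1 - 1/x + 2!/x^2 - 3!/x^3 + ...)

and taking logarithms,

log(E1(x)) ≈ -x - log(x) + log1p(-1/x + 2!/x^2 - 3!/x^3 + ...)

which is exactly the -x - np.log(x) + np.log1p(...) form used in the code below.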

From some quick experimentation, 18 terms is enough for full float precision once x >= 50 (which is also around where scipy's exp1 starts losing full precision). The series is also really nice: the coefficients are just the factorials, so it can be evaluated Horner-style without catastrophic cancellation:

def _exp1ln_taylor(x, k):
    # Evaluate the first k terms of the asymptotic series
    #   sum_{i=1..k} (-1)^i * i! / x^i
    # Horner-style, starting from the smallest term.
    r = 0
    s = -1
    for i in range(k, 0, -1):
        r = (r + s) * i / x
        s = -s
    return r * s

import numpy as np
import scipy.special

def exp1ln(x):
    x = np.array(x)
    use_approx = x >= 50

    # Feed each branch only values in its safe range, so the entries that a
    # branch will not be used for don't trigger underflow/log(0) warnings:
    # the series branch gets a dummy 100 where x < 50, and exp1 gets a
    # dummy 1 where x >= 50.
    ax = np.where(use_approx, x, 100)
    sx = np.where(use_approx, 1, x)
    approx = -x - np.log(x) + np.log1p(_exp1ln_taylor(ax, 18))
    sp = np.log(scipy.special.exp1(sx))
    return np.where(use_approx, approx, sp)
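As a quick sanity check in the overlap region (50 <= x <= 500, where scipy's exp1 has not underflowed yet), the series branch and the direct log should agree to within floating-point noise:

for x in [50.0, 100.0, 200.0, 500.0]:
    series = -x - np.log(x) + np.log1p(_exp1ln_taylor(x, 18))
    direct = np.log(scipy.special.exp1(x))
    print(x, series - direct)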

Comparison to WolframAlpha (which can provide hundreds of digits):

x           exp1ln(x)           WolframAlpha
0.1         0.6004417824948862  0.6004417824948862
1          -1.5169319590020440 -1.5169319590020456
10         -12.390724371937408 -12.390724371937408
100        -104.61502435050535 -104.61502435050535
1000       -1006.9087537832978 -1006.9087537832978
1000000000 -1000000020.7232659 -1000000020.7232658
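The exp1ln column above can be reproduced with something like:

for x in [0.1, 1.0, 10.0, 100.0, 1000.0, 1e9]:
    print(x, float(exp1ln(x)))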
orlp
  • This is beautiful, thanks! I'd tried the convergent series expansion and then gave up when it wasn't even close after a few terms (like it says in the article you linked), never even saw the divergent one – samiller Mar 03 '21 at 13:22

For anyone who wants a version of exp1ln that deals only with single numbers instead of arrays, I adapted the solution from orlp to get this:

import numpy as np
from scipy.special import exp1

def _exp1ln_taylor(x, k):
    # Same asymptotic-series helper as in orlp's answer:
    # sum_{i=1..k} (-1)^i * i! / x^i, evaluated Horner-style.
    r = 0
    s = -1
    for i in range(k, 0, -1):
        r = (r + s) * i / x
        s *= -1
    return r * s

def exp1ln(x):
    if x <= 50:
        # scipy's exp1 is accurate here and does not underflow.
        return np.log(exp1(x))
    else:
        # Asymptotic series for large x, truncated after 18 terms.
        return -x - np.log(x) + np.log1p(_exp1ln_taylor(x, 18))
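For example:

print(exp1ln(1.0))     # about -1.51693, same as np.log(exp1(1.0))
print(exp1ln(1000.0))  # about -1006.90875, where np.log(exp1(1000.0)) would be -inf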
samiller