8

In MATLAB there is a special function, `erfcx` (the scaled complementary error function), which is not available in any of the Python collections I know of (numpy, scipy, mpmath, ...).

Are there other places where functions like this one may be found?

UPD For all who think the question is trivial: please try to compute this function for an argument of ~30 first.
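To see why (a minimal sketch of the naive formula in double precision; erfcx(30) itself is a perfectly ordinary number, ~0.0188):

```python
import math

x = 30.0
try:
    # math.exp(900) exceeds the double-precision range (~1.8e308) and raises
    naive = math.exp(x * x) * math.erfc(x)
except OverflowError:
    naive = None

print(naive)  # None: the naive route overflows well before x = 30
```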

UPD2 Arbitrary precision is a nice workaround, but if possible I would prefer to avoid it. I need "standard" machine precision (no more, no less) and the maximum speed possible.

UPD3 It turns out mpmath gives a surprisingly inaccurate result. Even where the standard Python math module works, mpmath's results are worse, which makes it worthless for my purposes.

UPD4 The code to compare different ways to compute erfcx:

import numpy as np

def int_erfcx(x):
    "Integral which gives erfcx"
    from scipy import integrate
    def f(xi):
        return np.exp(-x*xi) * np.exp(-0.5*xi*xi)
    # 0.7978845608... = sqrt(2/pi); the upper limit is a safe cutoff
    return 0.79788456080286535595 * integrate.quad(
        f, 0.0, min(2.0, 50.0/(1.0+x)) + 100.0, limit=500)[0]

def my_erfcx(x):
    """M. M. Shepherd and J. G. Laframboise, 
       MATHEMATICS OF COMPUTATION 36, 249 (1981)
       Note that it is reasonable to compute it in long double 
       (or whatever python has)
    """
    ch_coef=[np.float128(0.1177578934567401754080e+01),
             np.float128(  -0.4590054580646477331e-02),
             np.float128( -0.84249133366517915584e-01),
             np.float128(  0.59209939998191890498e-01),
             np.float128( -0.26658668435305752277e-01),
             np.float128(   0.9074997670705265094e-02),
             np.float128(  -0.2413163540417608191e-02),
             np.float128(    0.490775836525808632e-03),
             np.float128(    -0.69169733025012064e-04),
             np.float128(      0.4139027986073010e-05),
             np.float128(       0.774038306619849e-06),
             np.float128(      -0.218864010492344e-06),
             np.float128(        0.10764999465671e-07),
             np.float128(         0.4521959811218e-08),
             np.float128(         -0.775440020883e-09),
             np.float128(          -0.63180883409e-10),
             np.float128(           0.28687950109e-10),
             np.float128(             0.194558685e-12),
             np.float128(            -0.965469675e-12),
             np.float128(              0.32525481e-13),
             np.float128(              0.33478119e-13),
             np.float128(              -0.1864563e-14),
             np.float128(              -0.1250795e-14),
             np.float128(                 0.74182e-16),
             np.float128(                 0.50681e-16),
             np.float128(                 -0.2237e-17),
             np.float128(                 -0.2187e-17),
             np.float128(                    0.27e-19),
             np.float128(                    0.97e-19),
             np.float128(                     0.3e-20),
             np.float128(                    -0.4e-20)]
    K = np.float128(3.75)
    y = (x - K) / (x + K)             # map x in [0, inf) onto y in [-1, 1)
    y2 = np.float128(2.0) * y
    # Clenshaw recurrence for the Chebyshev series; c0 is handled separately
    (d, dd) = (ch_coef[-1], np.float128(0.0))
    for cj in ch_coef[-2:0:-1]:
        (d, dd) = (y2 * d - dd + cj, d)
    d = y * d - dd + ch_coef[0]
    return d / (np.float128(1) + np.float128(2) * x)

def math_erfcx(x):
    import scipy.special as spec
    return spec.erfc(x) * np.exp(x*x)

def mpmath_erfcx(x):
    import mpmath
    return mpmath.exp(x**2) * mpmath.erfc(x)

if __name__ == "__main__":
    x=np.linspace(1.0,26.0,200)
    X=np.linspace(1.0,100.0,200)

    intY  = np.array([int_erfcx(xx*np.sqrt(2)) for xx in X])
    myY   = np.array([my_erfcx(xx) for xx in X])
    myy   = np.array([my_erfcx(xx) for xx in x])
    mathy = np.array([math_erfcx(xx) for xx in x])
    mpmathy = np.array([mpmath_erfcx(xx) for xx in x])
    mpmathY = np.array([mpmath_erfcx(xx) for xx in X])

    print ("Integral vs exact: %g"%max(np.abs(intY-myY)/myY))
    print ("math vs exact:     %g"%max(np.abs(mathy-myy)/myy))
    print ("mpmath vs math:    %g"%max(np.abs(mpmathy-mathy)/mathy))
    print ("mpmath vs integral:%g"%max(np.abs(mpmathY-intY)/intY))


For me, it gives

Integral vs exact: 6.81236e-16
math vs exact:     7.1137e-16
mpmath vs math:    4.90899e-14
mpmath vs integral:8.85422e-13

Obviously, math gives the best possible precision where it works, while mpmath's error is a couple of orders of magnitude larger where math works, and larger still for bigger arguments.

John
Misha
  • `erfcx()` is not that hard to implement, don't you think? – aayoubi Jan 22 '12 at 16:23
  • That looks absolutely trivial to implement. – Dan Jan 22 '12 at 16:26
  • @Ayoubi If you don't care about precision for large arguments, maybe. But I do. – Misha Jan 22 '12 at 16:30
  • @Dan: **wrong**. If you don't care about correctness then `erfcx = lambda x: numpy.exp(x**2)*scipy.special.erfc(x)` might work for you. But in general it is not *absolutely* trivial whenever you're dealing with floating point values. – jfs Jan 22 '12 at 16:35
  • Could you give an example where mpmath gives an inaccurate result? It is very surprising that it would be worse than the standard python math module. – Fredrik Johansson Jan 30 '12 at 15:08
  • @Fredrik Johansson See update. – Misha Feb 06 '12 at 19:21
  • You are just seeing the result of using a numerically unstable formula. The error is in computing x**2 and then feeding that to exp, when x is large. Both the value of exp and erfc computed by mpmath are fully accurate, as functions of the actual input. If you evaluate x**2 using extra precision (as mpmath easily lets you do), you get the same result as with the numerically stable algorithms. – Fredrik Johansson Feb 13 '12 at 09:47
  • It is curious that the "math" version seems more accurate though. Take for example x[-5] = 7176904682562153 * 2**(-48); here scipy.special.erfc(x) = 1.0005885021737542e-284, mpmath.erfc(x) = 1.0005885021737114e-284, which is correct except for the last digit, as can be verified by increasing the precision, or with Mathematica. But in computing scipy.special.erfc(x) * exp(x**2), both errors seem to cancel out, leaving the final result more accurate. – Fredrik Johansson Feb 13 '12 at 10:16
  • Actually, this makes perfect sense now that I think about it: obviously scipy.erfc essentially computes erfcx(x) internally, then performs a division by an (inaccurate) approximation of exp(x**2). Then when computing erfcx, you multiply by *exactly the same* (inaccurate) approximation of exp(x**2), cancelling the errors. In summary, scipy.erfc(x) is less accurate than mpmath.erfc(x), but this way of computing erfcx is more accurate by accident (!). Computing exp(x**2) with increased precision for the squaring in mpmath gives the same result. Quite a subtle numerical issue. – Fredrik Johansson Feb 13 '12 at 10:26
  • @Fredrik Johansson Yes, it is funny. What is more funny is that these packages do not provide perfectly defined and much more accurate erfcx which is usually computed internally. – Misha Feb 13 '12 at 20:03
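Fredrik's fix can be sketched as a hypothetical helper (assuming mpmath's `workprec` context manager; the extra guard bits absorb the rounding of `x**2`):

```python
import mpmath

def erfcx_mp(x, extra=30):
    # do the squaring (and everything else) with 'extra' guard bits,
    # then round the final result back to the working precision
    with mpmath.workprec(mpmath.mp.prec + extra):
        r = mpmath.exp(mpmath.mpf(x)**2) * mpmath.erfc(x)
    return +r  # unary plus re-rounds to the current mpmath.mp.prec
```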

5 Answers

5

Here is a simple and fast implementation giving 12-13 digit accuracy globally:

import numpy as np
from scipy.special import erfc

def erfcx(x):
    if x < 25:
        return erfc(x) * np.exp(x*x)
    else:
        # 6-term asymptotic series; 0.5641895... = 1/sqrt(pi)
        y = 1. / x
        z = y * y
        s = y*(1.+z*(-0.5+z*(0.75+z*(-1.875+z*(6.5625-29.53125*z)))))
        return s * 0.564189583547756287
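A quick way to check the claimed accuracy (a sketch using a high-precision mpmath value as reference; the function is repeated so the snippet is self-contained):

```python
import numpy as np
import mpmath
from scipy.special import erfc

def erfcx_hybrid(x):
    # same hybrid as above: direct formula below 25, asymptotic series above
    if x < 25:
        return erfc(x) * np.exp(x * x)
    y = 1.0 / x
    z = y * y
    s = y * (1.0 + z * (-0.5 + z * (0.75 + z * (-1.875 + z * (6.5625 - 29.53125 * z)))))
    return s * 0.564189583547756287  # 1/sqrt(pi)

def erfcx_ref(x):
    # reference value computed with 100 bits of precision
    with mpmath.workprec(100):
        return float(mpmath.exp(mpmath.mpf(x)**2) * mpmath.erfc(x))

worst = max(abs(erfcx_hybrid(x) - erfcx_ref(x)) / erfcx_ref(x)
            for x in np.linspace(1.0, 100.0, 200))
print(worst)  # should comfortably meet the claimed 12-13 digits
```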
Fredrik Johansson
3

A highly optimized C++ implementation of erfcx (for both real and complex arguments) was recently merged into SciPy and should appear in version 0.12.
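With such a SciPy (0.12 or later), this reduces to a library call, and it is vectorized over arrays:

```python
import numpy as np
from scipy.special import erfcx  # available since SciPy 0.12

print(erfcx(30.0))                            # ~0.0187958888614, no overflow
print(erfcx(np.array([1.0, 30.0, 1000.0])))   # works elementwise on arrays
```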

  • The `scipy.special.erfcx` function is [now available in SciPy 0.12](http://docs.scipy.org/doc/scipy-dev/reference/release.0.12.0.html). – Steven G. Johnson Apr 25 '13 at 15:01
3

The gmpy2 library provides access to the MPFR multiple-precision library. For normal precision, it is almost 5x faster than mpmath.

$ py27 -m timeit -s "import mpmath" -s "def erfcx(x):return mpmath.exp(x**2) * mpmath.erfc(x)" "erfcx(30)"
10000 loops, best of 3: 47.3 usec per loop
$ py27 -m timeit -s "import gmpy2" -s "def erfcx(x):return gmpy2.exp(x**2) * gmpy2.erfc(x)" "erfcx(30)"
100000 loops, best of 3: 10.8 usec per loop

Both libraries return the same result for x = 30.

>>> import mpmath
>>> import gmpy2
>>> mpmath.exp(30**2) * mpmath.erfc(30)
mpf('0.018795888861416751')
>>> gmpy2.exp(30**2) * gmpy2.erfc(30)
mpfr('0.018795888861416751')
>>> 

Disclaimer: I maintain gmpy2. I'm actively working towards a new release but there shouldn't be any issues with the current release for this calculation.

Edit: I was curious about the overhead of making two function calls instead of just one, so I implemented a gmpy2.erfcx() entirely in C, still using MPFR to perform the calculations. The improvement was less than I expected. If you think erfcx() would be useful, I can add it to the next release.

$ py27 -m timeit -s "import gmpy2" "gmpy2.erfcx(30)"
100000 loops, best of 3: 9.45 usec per loop
casevh
  • Thank you for the answer. It would be nice to have more functions like this one available to avoid numerical issues. There are other examples. E.g., all libraries have Laguerre polynomials but not Laguerre functions (which are Laguerre polynomials scaled with an exponent). When I needed one, I had to write my own recursive implementation. It is a matter of a few minutes to write a fast implementation when you have a fast implementation of the polynomial, but nobody did it for scipy and the others. Probably, it was also considered "trivial". – Misha Jan 23 '12 at 04:35
2

This function is now available in the newest version of SciPy; see http://docs.scipy.org/doc/scipy/reference/special.html.

user1397974
2

I don't know of any standard source that includes that function, but you can implement it in a straightforward fashion, at least if you use mpmath and aren't too worried about performance:

import math
import mpmath

def erfcx(x):
    return math.exp(x**2) * math.erfc(x)

def erfcx_mp(x):
    return mpmath.exp(x**2) * mpmath.erfc(x)

where = mpmath.linspace(1, 50, 10) + mpmath.linspace(100, 1000, 5)
for x in where:
    try:
        std = erfcx(x)
    except OverflowError:
        std = None
    new = erfcx_mp(x)
    approx = (1/(x*mpmath.pi**0.5))
    print x, std, new, (new-approx)/approx 

produces

1.0 0.427583576156 0.427583576155807 -0.242127843858688
6.44444444444444 0.0865286153111 0.0865286153111425 -0.0116285899486798
11.8888888888889 0.0472890800456 0.0472890800455829 -0.00350053472385845
17.3333333333333 0.032495498521 0.0324954985209682 -0.00165596082986796
22.7777777777778 0.024745497 0.0247454970000106 -0.000960939188986022
28.2222222222222 None 0.0199784436993529 -0.000626572735073611
33.6666666666667 None 0.0167507236463156 -0.000440550710337029
39.1111111111111 None 0.0144205913280408 -0.000326545959369654
44.5555555555556 None 0.0126594222570918 -0.00025167403795913
50.0 None 0.0112815362653238 -0.000199880119832415
100.0 None 0.00564161378298943 -4.99925018743586e-5
325.0 None 0.00173595973189465 -4.73366058776083e-6
550.0 None 0.00102579754728657 -1.6528843659911e-6
775.0 None 0.000727985953393782 -8.32464102161289e-7
1000.0 None 0.000564189301453388 -4.9999925011689e-7

And it behaves as it should even when the math.* routines overflow. mpmath's interval support isn't quite up to the task (without some hackery I'm too lazy to do), but for this I'm pretty sure mpfs will suffice, as erfcx is simply the product of two things mpmath can compute nicely.

DSM
  • Yep, was just about to mention mpmath ( http://code.google.com/p/mpmath/ ) as a possible answer. For the example, with x=30 it gives me `mpf('0.018795888861416751')` which matches up with the result I get from Octave. – David H. Clements Jan 22 '12 at 17:15
  • If you have particular speed, accuracy, or domain requirements you didn't mention, you should probably edit those into your question. – DSM Jan 22 '12 at 17:43
  • There was some work a while back to get `mpmath` set up with `cython` leading to some nice performance improvements. It is currently only available with `sage` ( http://www.sagemath.org/ ) , but it may be possible to pull it out or hook into it. – David H. Clements Jan 22 '12 at 17:52