I am trying to write a fast algorithm to compute the log gamma function. My current implementation seems naive: it just iterates 10 million times to compute the log of the gamma function (I am also using numba to optimise the code).
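
For reference, I believe the series my loop truncates (it follows from the Weierstrass product for the gamma function) is

$$\log \Gamma(z) = -\gamma z - \log z + \sum_{k=1}^{\infty}\left[\frac{z}{k} - \log\left(1 + \frac{z}{k}\right)\right]$$

where $\gamma$ is the Euler-Mascheroni constant; pulling the $\sum z/k$ part out of the sum as a precomputed harmonic number gives the `z*HARMONC_10MIL` term in the code.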

import numpy as np
from numba import njit
EULER_MAS = 0.577215664901532 # euler mascheroni constant
HARMONC_10MIL = 16.695311365860007 # sum of 1/k from 1 to 10,000,000

@njit(fastmath=True)
def gammaln(z):
"""Compute log of gamma function for some real positive float z"""
    out = -EULER_MAS*z - np.log(z) + z*HARMONC_10MIL
    n = 10000000 # number of iters
    for k in range(1,n+1,4):
        # loop unrolling
        v1 = np.log(1 + z/k)
        v2 = np.log(1 + z/(k+1))
        v3 = np.log(1 + z/(k+2))
        v4 = np.log(1 + z/(k+3))
        out -= v1 + v2 + v3 + v4

    return out

I timed my code against the scipy.special.gammaln implementation and mine is literally hundreds of thousands of times slower. So I am doing something very wrong or very naive (probably both), although my answers agree with scipy's to at least 4 decimal places at worst.

I tried to read the _ufunc code implementing scipy's gammaln function, but I don't understand the cython code that the _gammaln function is written in.

Is there a faster and more optimised way I can calculate the log gamma function? How can I understand scipy's implementation so I can incorporate it with mine?

PyRsquared
  • What is an example input for `z`? I don't know the formula but that doesn't mean people can't have a go at vectorizing this - we need to know how to call the function to test, though. – roganjosh Feb 24 '19 at 10:37
  • Also, if we're talking about 100,000s of times slower than Scipy, please make sure it doesn't take us an age to run it with the example input :) – roganjosh Feb 24 '19 at 10:38
  • @roganjosh Running the function with the argument `1` took about 50ms on my machine, so I guess this is safe to run – user8408080 Feb 24 '19 at 10:42
  • @user8408080 oki doki. Is the input supposed to be an int or an array do you know? – roganjosh Feb 24 '19 at 10:46
  • As far as I know it can be any complex number (see [here](http://mathworld.wolfram.com/LogGammaFunction.html)). But only a single number – user8408080 Feb 24 '19 at 10:47
  • @user8408080 ok, thanks. Something was off with your timing though, because I get `%timeit gammaln(1) 23.2 s ± 1.79 s per loop (mean ± std. dev. of 7 runs, 1 loop each)`. Dropping `n` is a simple enough fix for that, though, for testing. – roganjosh Feb 24 '19 at 10:57
  • @roganjosh I tried again and still got only about 50ms. What numpy/numba version do you use? I'm only working on an i5-3470 – user8408080 Feb 24 '19 at 11:02
  • Out of interest: how come you don't want to use the function provided by `scipy`? Posted an answer below that should help. – Till Hoffmann Feb 24 '19 at 11:03
  • @user8408080 the timings were without `numba` but I went back and tried again with numba and it's still taking ages. How are you timing this? I have a feeling you are only capturing the `njit` wrapper and function definition, and not actually the processing time. `numpy 1.14.5`, `numba 0.38.0`. – roganjosh Feb 24 '19 at 11:05
  • @roganjosh I used the `%timeit` magic from IPython like this: `%timeit gammaln(1.5)`. Is this bad practice? – user8408080 Feb 24 '19 at 11:08
  • @user8408080 no, that's exactly what I'm doing! What's going on here?! You kept `n = 10000000`? I mean, I'm doing this on a laptop, but this discrepancy is crazy. – roganjosh Feb 24 '19 at 11:10
  • The gammaln implementation used by scipy is written in C: https://github.com/scipy/scipy/blob/master/scipy/special/cephes/gamma.c (lgam). The name of the backend implementation can be found here: https://github.com/scipy/scipy/blob/master/scipy/special/functions.json – max9111 Feb 24 '19 at 20:57

3 Answers


The runtime of your function will scale linearly (up to some constant overhead) with the number of iterations, so getting the number of iterations down is key to speeding up the algorithm. Whilst computing HARMONC_10MIL beforehand is a smart idea, it actually leads to worse accuracy when you truncate the series: keeping each z/k term paired with its log(1 + z/k) counterpart and truncating both at the same point turns out to give higher accuracy.

The code below is a modified version of the code posted above (although using cython instead of numba).

from libc.math cimport log, log1p
cimport cython
cdef:
    float EULER_MAS = 0.577215664901532 # euler mascheroni constant

@cython.cdivision(True)
def gammaln(float z, int n=1000):
    """Compute log of gamma function for some real positive float z"""
    cdef:
        float out = -EULER_MAS*z - log(z)
        int k
        float t
    for k in range(1, n):
        t = z / k
        out += t - log1p(t)

    return out
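
If you want to try this in a Jupyter notebook, here is a minimal sketch of how to compile it, assuming the Cython extension is installed: load the magic once, then put the code in a `%%cython` cell, which is compiled when it is first executed.

%load_ext Cython

%%cython
from libc.math cimport log, log1p
cimport cython

cdef float EULER_MAS = 0.577215664901532 # euler mascheroni constant

@cython.cdivision(True)
def gammaln(float z, int n=1000):
    """Compute log of gamma function for some real positive float z"""
    cdef:
        float out = -EULER_MAS*z - log(z)
        int k
        float t
    for k in range(1, n):
        t = z / k
        out += t - log1p(t)
    return out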

It is able to obtain a close approximation even after 100 iterations, as shown in the figure below.

[Figure: accuracy of the truncated-series approximation as the number of iterations grows]

At 100 iterations, its runtime is of the same order of magnitude as scipy.special.gammaln:

%timeit special.gammaln(5)
# 932 ns ± 19 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
%timeit gammaln(5, 100)
# 1.25 µs ± 20.3 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)

The remaining question is of course how many iterations to use. The function log1p(t) can be expanded as a Taylor series for small t (which is relevant in the limit of large k). In particular,

log1p(t) = t - t ** 2 / 2 + ...

such that, for large k, the argument of the sum becomes

t - log1p(t) = t ** 2 / 2 + ...

Consequently, the summand is second order in t, which is negligible if t is sufficiently small. In other words, the number of iterations should be at least as large as z, preferably at least an order of magnitude larger.
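
As a quick empirical check of that rule of thumb, here is a sketch (assuming the cython `gammaln` above has been compiled) that compares the truncated series against scipy for a few cutoffs:

from scipy import special

z = 5.0
for n in (10, 100, 1000, 10000):
    # absolute truncation error relative to scipy's reference value
    print(n, abs(gammaln(z, n) - special.gammaln(z)))

From the Taylor argument above, the truncation error should shrink roughly like z**2 / (2*n) once n is much larger than z (until the single-precision floats used in the cython code start to dominate).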

However, I'd stick with scipy's well-tested implementation if at all possible.

Till Hoffmann
  • great answer, it really does seem to work fast in your example! one stupid question, how do I get the libc.math library? I've already pip installed Cython, but can't seem to find the libc.math library. – PyRsquared Feb 24 '19 at 15:21
  • `libc.math` should be included by default, I think. However, I regularly make the mistake of writing `import` rather than `cimport` for cython includes. Is that maybe the problem? – Till Hoffmann Feb 24 '19 at 15:27
  • It doesn't seem to work with any permutation of `import` or `cimport` in the code you have above... I can run `import cython` but not `from libc.math cimport ...` (this results in syntax error) or `from libc.math import ...` (this results in ModuleNotFoundError) – PyRsquared Feb 24 '19 at 15:37
  • my mistake @Till Hoffmann, I was running this in a jupyter notebook without the proper setup. Got it working. Thanks so much! – PyRsquared Feb 24 '19 at 19:22

I managed to get a performance increase of roughly 3x by trying the parallel mode of numba and using mostly vectorized functions (sadly, numba can't understand `numpy.subtract.reduce`).

from functools import reduce
import numpy as np
from numba import njit

EULER_MAS = 0.577215664901532 # euler mascheroni constant (from the question)
HARMONC_10MIL = 16.695311365860007 # sum of 1/k from 1 to 10,000,000 (from the question)

@njit(fastmath=True, parallel=True)
def gammaln_vec(z):
    out = -EULER_MAS*z - np.log(z) + z*HARMONC_10MIL
    n = 10000000

    # compute all log(1 + z/k) terms in one vectorized call
    v = np.log(1 + z/np.arange(1, n+1))

    # reduce accumulates 0 - v[0] - v[1] - ... = -sum(v),
    # so adding it subtracts the series from out
    return out + reduce(lambda x1, x2: x1 - x2, v, 0)

Times:

#Your function:
%timeit gammaln(1.5)
48.6 ms ± 1.23 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

#My function:
%timeit gammaln_vec(1.5)
15 ms ± 340 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

#scipy's function (gammaln_sp = scipy.special.gammaln)
%timeit gammaln_sp(1.5)
1.07 µs ± 18.7 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)

So you will still be much better off using scipy's function; without dropping down to C code, I don't know how to break it down further.

user8408080

Regarding the second part of your question: an example of wrapping the scipy.special functions for use in Numba may also be useful.

Example

Wrapping Cython cdef functions is quite easy and portable as long as only simple datatypes are involved (int, double, double*, ...). For documentation on how to call the scipy.special functions, have a look at this. The names you actually need to wrap a function are in scipy.special.cython_special.__pyx_capi__. Names of functions which can be called with different datatypes are mangled, but determining the right one is quite easy (just look at the datatypes).
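
For example, a quick sketch to list the gamma-related entries (and see what the mangled names look like):

from scipy.special import cython_special

# __pyx_capi__ maps exported function names to C function pointers (capsules)
print([name for name in cython_special.__pyx_capi__ if "gamma" in name])

`gammaln` shows up under its plain name because it only has a single signature (double in, double out).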

#slightly modified version of https://github.com/numba/numba/issues/3086
from numba.extending import get_cython_function_address
from numba import vectorize, njit
import ctypes
import numpy as np

_PTR = ctypes.POINTER
_dble = ctypes.c_double
_ptr_dble = _PTR(_dble)

# look up the raw C function pointer that scipy exports for gammaln
addr = get_cython_function_address("scipy.special.cython_special", "gammaln")
# CFUNCTYPE takes the return type first, then the argument types:
# double gammaln(double)
functype = ctypes.CFUNCTYPE(_dble, _dble)
gammaln_float64 = functype(addr)

@njit
def numba_gammaln(x):
    return gammaln_float64(x)

Usage within Numba

#Numba example with loops
import numba as nb
import numpy as np

@nb.njit()
def Test_func(A):
    out = np.empty(A.shape[0])
    for i in range(A.shape[0]):
        out[i] = numba_gammaln(A[i])
    return out

Timings

A = np.random.rand(1_000_000)
Test_func(A):              39.1 ms
scipy.special.gammaln(A):  39.1 ms

Of course you can easily parallelize this function (see the sketch below) to outperform the single-threaded gammaln implementation in scipy, and you can call it efficiently within any Numba-compiled function.
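
For instance, here is a minimal sketch of such a parallel variant, assuming the `numba_gammaln` wrapper from above (`Test_func_parallel` is just an illustrative name):

import numba as nb
import numpy as np

@nb.njit(parallel=True)
def Test_func_parallel(A):
    out = np.empty(A.shape[0])
    # prange distributes the loop iterations over all available threads
    for i in nb.prange(A.shape[0]):
        out[i] = numba_gammaln(A[i])
    return out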

max9111