
I am writing a rather big simulation in Python and was hoping to get some extra performance out of Cython. However, for the code below I don't seem to gain all that much, even though it contains a rather large loop of roughly 100k iterations.

Did I make some beginner's mistake, or is this loop size simply too small to have a big effect? (In my tests the Cython code was only about 2 times faster.)

import numpy as np
cimport numpy as np
import math

ctypedef np.complex64_t cpl_t
cpl = np.complex64

def example(double a, np.ndarray[cpl_t,ndim=2] A):

    cdef int N = 100

    cdef np.ndarray[cpl_t,ndim=3] B = np.zeros((3,N,N),dtype = cpl)

    cdef Py_ssize_t n, m
    for n in range(N):
        for m in range(N):

            if np.sqrt(A[0,n]) > 1:
                B[0,n,m] = A[0,n] + 1j * A[0,m]

    return B
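
(For anyone reproducing this: a minimal setup.py along the lines below should be enough to build such a .pyx module; the file and module names are placeholders, not taken from the question.)

from setuptools import setup, Extension
from Cython.Build import cythonize
import numpy as np

ext = Extension(
    "example",                        # module name (placeholder)
    ["example.pyx"],                  # Cython source file (placeholder)
    include_dirs=[np.get_include()],  # NumPy headers needed by "cimport numpy"
)

setup(ext_modules=cythonize(ext))

It is built with python setup.py build_ext --inplace and then imported like a normal module.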
physicsGuy
    You're making an `np.sqrt` call inside the loop. That looks like a performance problem. Why is it in the loop, anyway? `a` never changes. Why not just do `if a <= 1: return B` before the loop? – user2357112 Jun 30 '16 at 16:56
  • @GWW: That looks like the first row to me. – user2357112 Jun 30 '16 at 17:30
  • @user2357112 Thanks, that's actually something I overlooked; I can move that out. Are math operations such as np.sqrt() or np.exp() something I should avoid in Cython? – physicsGuy Jul 01 '16 at 08:27
  • These call back into Python, so yes, you might want to avoid them if you want to run outside the GIL (e.g. for multithreading). – Sergei Lebedev Jul 01 '16 at 08:44
  • You're usually better off using the C standard library maths functions in Cython. – DavidW Jul 01 '16 at 12:44
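
Following up on the last two comments, this is a minimal sketch of what calling the C standard library maths functions from Cython looks like (illustrative only, not a drop-in replacement for the function above):

from libc.math cimport sqrt, exp

def c_math_demo(double x):
    # sqrt and exp here are the C library functions, not np.sqrt/np.exp,
    # so the calls compile to plain C and never go back into Python
    cdef double y = sqrt(x) + exp(-x)
    return y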

1 Answer


You should use compiler directives. I wrote your function in plain Python:

import numpy as np

def example_python(a, A):
    N = 100
    B = np.zeros((3,N,N), dtype=np.complex128)
    aux = np.sqrt(A[0]).real   # hoist the sqrt out of the loops, compare on the real part
    for n in range(N):
        if aux[n] > 1:
            for m in range(N):
                B[0,n,m] = A[0,n] + 1j * A[0,m]
    return B

and in Cython (you can read about compiler directives in the Cython documentation):

import cython
import numpy as np
cimport numpy as np

ctypedef np.complex64_t cpl_t
cpl = np.complex64

@cython.boundscheck(False) # compiler directive: skip bounds checks on buffer indexing
@cython.wraparound(False)  # compiler directive: disable negative-index wrapping
def example_cython(double a, np.ndarray[cpl_t,ndim=2] A):

    cdef int N = 100
    cdef np.ndarray[cpl_t,ndim=3] B = np.zeros((3,N,N),dtype = cpl)
    cdef np.ndarray[float, ndim=1] aux
    cdef Py_ssize_t n, m
    aux = np.sqrt(A[0,:]).real
    for n in range(N):
        if aux[n] > 1.:
            for m in range(N):
                B[0,n,m] = A[0,n] + 1j * A[0,m]
    return B

I compared both functions:

c = np.array(np.random.rand(100,100)+1.5+1j*np.random.rand(100,100), dtype=np.complex64)

%timeit example_python(100, c)
10 loops, best of 3: 61.8 ms per loop

%timeit example_cython(100, c)
10000 loops, best of 3: 134 µs per loop

Cython is ~450 times faster than Python in this case.
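
A quick way to check that both versions agree on that test array (before worrying about timings) is something like this, using the c, example_python and example_cython defined above:

import numpy as np

# the Cython result is complex64, so compare with a tolerance
assert np.allclose(example_python(100, c), example_cython(100, c))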

sebacastroh