I am trying to follow a simple example for parallelizing a loop with Cython's prange. I have installed OpenBLAS 0.2.14 with OpenMP enabled and compiled NumPy 1.10.1 and SciPy 0.16 from source against OpenBLAS. To test the performance of the libraries I am following this example: http://nealhughes.net/parallelcomp2/. The functions to be timed are copied from the site:
import numpy as np
from math import exp
from libc.math cimport exp as c_exp
from cython.parallel import prange, parallel

def array_f(X):
    Y = np.zeros(X.shape)
    index = X > 0.5
    Y[index] = np.exp(X[index])
    return Y

def c_array_f(double[:] X):
    cdef int N = X.shape[0]
    cdef double[:] Y = np.zeros(N)
    cdef int i

    for i in range(N):
        if X[i] > 0.5:
            Y[i] = c_exp(X[i])
        else:
            Y[i] = 0
    return Y

def c_array_f_multi(double[:] X):
    cdef int N = X.shape[0]
    cdef double[:] Y = np.zeros(N)
    cdef int i

    with nogil, parallel():
        for i in prange(N):
            if X[i] > 0.5:
                Y[i] = c_exp(X[i])
            else:
                Y[i] = 0
    return Y
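As a sanity check that the logic itself is right (my addition, not from the blog), the same branch-and-exponentiate computation can be verified in pure NumPy by comparing the vectorized version against an explicit loop:

```python
import numpy as np

def array_f(X):
    # Vectorized version: exp() only where X > 0.5, zero elsewhere.
    Y = np.zeros(X.shape)
    index = X > 0.5
    Y[index] = np.exp(X[index])
    return Y

def array_f_loop(X):
    # Explicit-loop reference, mirroring the Cython c_array_f logic.
    Y = np.zeros(X.shape[0])
    for i in range(X.shape[0]):
        if X[i] > 0.5:
            Y[i] = np.exp(X[i])
    return Y

X = -1 + 2 * np.random.rand(1000)
assert np.allclose(array_f(X), array_f_loop(X))
```

Both variants agree, so any timing difference comes from the execution strategy, not the arithmetic.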
The author of the code reports the following speed-ups for 4 cores:
from thread_demo import *
import numpy as np
X = -1 + 2*np.random.rand(10000000)
%timeit array_f(X)
1 loops, best of 3: 222 ms per loop
%timeit c_array_f(X)
10 loops, best of 3: 87.5 ms per loop
%timeit c_array_f_multi(X)
10 loops, best of 3: 22.4 ms per loop
When I run this example on my machine (a MacBook Pro with OS X 10.10), I get the following timings with export OMP_NUM_THREADS=1:
In [1]: from bla import *
In [2]: import numpy as np
In [3]: X = -1 + 2*np.random.rand(10000000)
In [4]: %timeit c_array_f(X)
10 loops, best of 3: 89.7 ms per loop
In [5]: %timeit c_array_f_multi(X)
1 loops, best of 3: 343 ms per loop
and with OMP_NUM_THREADS=4:
In [1]: from bla import *
In [2]: import numpy as np
In [3]: X = -1 + 2*np.random.rand(10000000)
In [4]: %timeit c_array_f(X)
10 loops, best of 3: 89.5 ms per loop
In [5]: %timeit c_array_f_multi(X)
10 loops, best of 3: 119 ms per loop
I see the same behavior on an openSUSE machine, hence my question: how can the author get a 4x speed-up while the same code runs slower with 4 threads on two of my systems?
The setup script for generating the *.c and *.so files is also identical to the one used in the blog:
from distutils.core import setup
from Cython.Build import cythonize
from distutils.extension import Extension
from Cython.Distutils import build_ext
import numpy as np

ext_modules = [
    Extension("bla",
              ["bla.pyx"],
              libraries=["m"],
              extra_compile_args=["-O3", "-ffast-math", "-march=native", "-fopenmp"],
              extra_link_args=["-fopenmp"],
              include_dirs=[np.get_include()])
]

setup(
    name="bla",
    cmdclass={"build_ext": build_ext},
    ext_modules=ext_modules
)
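Before timing, I also find it useful to confirm how many cores the machine exposes, what OMP_NUM_THREADS is actually set to, and which BLAS NumPy was built against. This diagnostic is my addition, not part of the blog's script, and assumes the standard np.__config__.show() helper is available:

```python
import multiprocessing
import os

import numpy as np

# Number of cores the OS reports; prange can use at most this many threads.
print("CPU count:", multiprocessing.cpu_count())

# Thread count requested via the environment (unset means the OpenMP default).
print("OMP_NUM_THREADS:", os.environ.get("OMP_NUM_THREADS", "<unset>"))

# Build-time BLAS/LAPACK configuration NumPy was compiled with.
np.__config__.show()
```

On my systems this confirms that multiple cores are visible and that NumPy links against OpenBLAS, so the environment itself does not look misconfigured.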
It would be great if someone could explain why this happens.