
What is the most efficient way to map a function over a numpy array? I am currently doing:

import numpy as np 

x = np.array([1, 2, 3, 4, 5])

# Obtain array of square of each element in x
squarer = lambda t: t ** 2
squares = np.array([squarer(xi) for xi in x])

However, this is probably very inefficient, since I am using a list comprehension to construct the new array as a Python list before converting it back to a numpy array. Can we do better?

Ryan

  • why not `squares = x**2`? Do you have a much more complicated function you need to evaluate? – 22degrees Feb 05 '16 at 03:45
  • How about only `squarer(x)`? – Life Jan 10 '18 at 16:12
  • Maybe this is not directly answering the question, but I've heard that [numba](http://numba.pydata.org) can compile existing python code into parallel machine instructions. I'll revisit and revise this post when I actually have a chance to use that. – 把友情留在无盐 Apr 30 '18 at 23:39
  • @Life `squarer(x)` will apply the `squarer` function over the elements of the array and return an array with the results of the individual `squarer(element)` invocations. I'm writing this because "how about only squarer(x)?" wasn't clear enough at first glance. – JustAnEuropean Mar 30 '22 at 21:45

11 Answers


I've tested all suggested methods plus np.array(list(map(f, x))) with perfplot (a small project of mine).

Message #1: If you can use numpy's native functions, do that.

If the function you're trying to vectorize is already vectorized (like the x**2 example in the original post), using that is much faster than anything else (note the log scale):

[Performance plot (log scale): the native vectorized operation is far faster than all loop-based variants]

If you actually need to vectorize a plain Python function, it doesn't really matter much which variant you use.

[Performance plot: the loop-based variants (for loop, map, fromiter, vectorize) all perform roughly the same]


Code to reproduce the plots:

import numpy as np
import perfplot
import math


def f(x):
    # return math.sqrt(x)
    return np.sqrt(x)


vf = np.vectorize(f)


def array_for(x):
    return np.array([f(xi) for xi in x])


def array_map(x):
    return np.array(list(map(f, x)))


def fromiter(x):
    return np.fromiter((f(xi) for xi in x), x.dtype)


def vectorize(x):
    return np.vectorize(f)(x)


def vectorize_without_init(x):
    return vf(x)


b = perfplot.bench(
    setup=np.random.rand,
    n_range=[2 ** k for k in range(20)],
    kernels=[
        f,
        array_for,
        array_map,
        fromiter,
        vectorize,
        vectorize_without_init,
    ],
    xlabel="len(x)",
)
b.save("out1.svg")
b.show()
Nico Schlömer
  • After installing perfplot (v0.3.2) via pip (``pip install -U perfplot``), I see the message: ``AttributeError: 'module' object has no attribute 'save'`` when pasting the example code. – tsherwen May 29 '18 at 14:09
  • What about a vanilla for loop? – Catiger3331 Jul 12 '18 at 13:51
  • Any significant difference in memory usage for these functions? I have code that runs fast using the direct-function approach, but for large arrays it gets out of memory (due to the temporary float64 representation from numpy.sqrt). – Pieter-Jan Busschaert Jun 25 '19 at 14:27
  • Please also do not forget the existence of ``numpy.apply_along_axis()``, a built-in, clean and scalable method, as mentioned in https://stackoverflow.com/a/58614807/3218693 – Bill Huang Jun 04 '20 at 22:14
  • @BillHuang `apply_along_axis` is vectorization, just along one single axis. – Nico Schlömer Jun 05 '20 at 07:34
  • @NicoSchlömer, it can still be used on a 1-D array: just reshape x into (-1, 1), apply along axis 0, and reshape back. By doing this I can almost reproduce the result I mentioned above. – Bill Huang Jun 05 '20 at 19:20
  • map works on rows instead of values for matrices, so vectorize works better – Jacob Eggers Jul 11 '20 at 04:33
  • Many of the above will not work if ndim > 1, AFAICS?! – hans_meine Apr 09 '21 at 12:24
  • You should really add `np.fromfunction(lambda n: f(x[n]), (len(x),), dtype='int')`. For me this was much faster than vectorize when the array was long. (I almost wonder if python/numpy are vectorizing in the background ...) See also @Wunderbar's answer below. (I won't edit the answer as I'm not getting a nice plot. :) ) – Duncan MacIntyre Jun 14 '21 at 22:24
  • @hans_meine You can always `.flatten()` then `.reshape()` when it's done. – Duncan MacIntyre Jun 14 '21 at 22:37
  • Correction to my previous comment: I was incorrect to suggest `np.fromfunction`. (I misunderstood what it does, and I read Wunderbar's answer too quickly.) fromfunction actually calls the given function *once* with vector arguments; it is the same as the f(x) vector call but with indexing overhead, and it would fail if f(x) could not do vector operations. It does not map as per the question. Wunderbar was actually suggesting np.frompyfunc, which was marginally faster than np.vectorize in my testing. You may want to include np.frompyfunc in your plot for comparison. See Wunderbar's answer. – Duncan MacIntyre Jun 23 '21 at 19:58

Use numpy.vectorize:

import numpy as np
x = np.array([1, 2, 3, 4, 5])
squarer = lambda t: t ** 2
vfunc = np.vectorize(squarer)
vfunc(x)

# Output: array([ 1,  4,  9, 16, 25])
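
A caveat worth keeping in mind: `np.vectorize` infers the output dtype by evaluating the function on the first element unless told otherwise. If you want a predictable dtype, you can pass the `otypes` argument; a minimal sketch:

import numpy as np

squarer = lambda t: t ** 2
# otypes pins the output dtype instead of inferring it from the first call
vfunc = np.vectorize(squarer, otypes=[np.int64])
vfunc(np.array([1, 2, 3, 4, 5]))
# Output: array([ 1,  4,  9, 16, 25])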
satomacoto
  • This isn't any more efficient. – user2357112 Feb 05 '16 at 02:34
  • From that doc: `The vectorize function is provided primarily for convenience, not for performance. The implementation is essentially a for loop.` In other questions I found that `vectorize` might double the user iteration speed. But the real speedup is with real `numpy` array operations. – hpaulj Feb 05 '16 at 06:09
  • Note that vectorize does at least make things work for non-1d arrays – Eric Aug 28 '17 at 00:34
  • But `squarer(x)` would already work for non-1d arrays. `vectorize` only really has any advantage over a list comprehension (like the one in the question), not over `squarer(x)`. – user2357112 Jan 10 '18 at 23:16
  • It used to be that `np.vectorize` was slower than the equivalent list comprehension. Now it scales better, so that with large arguments it is faster. It still isn't as fast as using the compiled `numpy` methods and operators without any sort of python level loop. – hpaulj Feb 27 '22 at 21:30

TL;DR

As noted by @user2357112, a "direct" method of applying the function is always the fastest and simplest way to map a function over Numpy arrays:

import numpy as np
x = np.array([1, 2, 3, 4, 5])
f = lambda x: x ** 2
squares = f(x)

Generally avoid np.vectorize, as it does not perform well, and has (or had) a number of issues. If you are handling other data types, you may want to investigate the other methods shown below.
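
One illustrative limitation of the direct call (raised in the comments further down): it only maps elementwise when the operation itself is elementwise. A function like `str` acts on the whole array instead, so an explicit mapping is still needed there; a minimal sketch:

import numpy as np

x = np.array([1, 2, 3])
str(x)                           # '[1 2 3]' - formats the whole array at once
np.array([str(xi) for xi in x])  # array(['1', '2', '3'], dtype='<U1')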

Comparison of methods

Here are some simple tests comparing four methods of mapping a function, this example using Python 3.6 and NumPy 1.15.4. First, the set-up functions for testing:

import timeit
import numpy as np

f = lambda x: x ** 2
vf = np.vectorize(f)

def test_array(x, n):
    t = timeit.timeit(
        'np.array([f(xi) for xi in x])',
        'from __main__ import np, x, f', number=n)
    print('array: {0:.3f}'.format(t))

def test_fromiter(x, n):
    t = timeit.timeit(
        'np.fromiter((f(xi) for xi in x), x.dtype, count=len(x))',
        'from __main__ import np, x, f', number=n)
    print('fromiter: {0:.3f}'.format(t))

def test_direct(x, n):
    t = timeit.timeit(
        'f(x)',
        'from __main__ import x, f', number=n)
    print('direct: {0:.3f}'.format(t))

def test_vectorized(x, n):
    t = timeit.timeit(
        'vf(x)',
        'from __main__ import x, vf', number=n)
    print('vectorized: {0:.3f}'.format(t))

Testing with five elements (sorted from fastest to slowest):

x = np.array([1, 2, 3, 4, 5])
n = 100000
test_direct(x, n)      # 0.265
test_fromiter(x, n)    # 0.479
test_array(x, n)       # 0.865
test_vectorized(x, n)  # 2.906

With 100s of elements:

x = np.arange(100)
n = 10000
test_direct(x, n)      # 0.030
test_array(x, n)       # 0.501
test_vectorized(x, n)  # 0.670
test_fromiter(x, n)    # 0.883

And with 1000s of array elements or more:

x = np.arange(1000)
n = 1000
test_direct(x, n)      # 0.007
test_fromiter(x, n)    # 0.479
test_array(x, n)       # 0.516
test_vectorized(x, n)  # 0.945

Different versions of Python/NumPy and compiler optimization will have different results, so do a similar test for your environment.
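
For what it's worth, the direct approach is not limited to 1-D input; NumPy's elementwise operators work on arrays of any shape, as a quick sketch shows:

import numpy as np

f = lambda x: x ** 2
a = np.arange(9).reshape(3, 3)
f(a)  # elementwise square; the (3, 3) shape is preserved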

Mike T
  • If you use the `count` argument and a generator expression then `np.fromiter` is significantly faster. – juanpa.arrivillaga Mar 26 '17 at 04:43
  • So, for example, use `'np.fromiter((f(xi) for xi in x), x.dtype, count=len(x))'` – juanpa.arrivillaga Mar 26 '17 at 04:44
  • @mike-t In line 16, column 22 of your script, you are giving a list comprehension to `fromiter` instead of an iterable object. Anyway, that does not change the result that much – SebMa Jun 29 '17 at 18:25
  • You didn't test the direct solution of `f(x)`, [which beats everything else by over an order of magnitude](https://ideone.com/ov8EkZ). – user2357112 Jan 10 '18 at 23:20
  • @user2357112 seems so obvious! I'll probably add this above in another week or two. I'd like to do some tests with it to see if there are limitations. – Mike T Jan 15 '18 at 22:47
  • What about if `f` has 2 variables and the array is 2D? – Sigur Nov 13 '18 at 23:32
  • @Sigur, I was looking into this tonight and tried these methods, but they only seem to work on 1-dimensional numpy arrays? Moving to a 2-dimensional array breaks the fromiter and the map approaches. – Lucas Roberts Mar 26 '19 at 02:04
  • @LucasRoberts, well, I have no news on that direction. – Sigur Mar 26 '19 at 11:41
  • @Sigur, yes I was just stating that I see the same thing. In other words, the posted solution handles 1-d numpy arrays, which matches the example given in the post. However, the stated title of the question is "Most efficient way to map function over numpy array", which without further specification would also cover 2-d, and more generally n-d cases for n>=2. It is helpful to clarify. – Lucas Roberts Mar 27 '19 at 18:13
  • I am confused as to how the 'f(x)' version ("direct") is actually considered comparable when the OP was asking how to "map" a function across an array. In the case of f(x) = x ** 2 the ** is being performed by numpy on the entire array, not on a per-element basis. For example, if f(x) is `lambda x: x + x` then the answer is very different because numpy concatenates the arrays instead of doing per-element addition. Is this really the intended comparison? Please explain. – Andrew Mellinger Sep 04 '19 at 12:19
  • The "direct" version is not element-wise, it's row-wise. To see this take for example `lambda x: str(x)` – Andreas K. Dec 19 '19 at 18:51

There are numexpr, numba and cython around; the goal of this answer is to take these possibilities into consideration.

But first let's state the obvious: no matter how you map a Python function onto a numpy array, it stays a Python function, which means that for every evaluation:

  • the numpy array element must be converted to a Python object (e.g. a Python float);
  • all calculations are done with Python objects, which brings the overhead of the interpreter, dynamic dispatch and immutable objects.

So which machinery is used to actually loop through the array doesn't play a big role because of the overhead mentioned above - it stays much slower than using numpy's built-in functionality.

Let's take a look at the following example:

# numpy-functionality
def f(x):
    return x+2*x*x+4*x*x*x

# python-function as ufunc
import numpy as np
vf=np.vectorize(f)
vf.__name__="vf"

np.vectorize is picked as a representative of the pure-Python class of approaches. Using perfplot (see the code in the appendix of this answer) we get the following running times:

[Performance plot: the numpy expression vs. np.vectorize]

We can see that the numpy approach is 10x-100x faster than the pure Python version. The decrease in performance for bigger array sizes is probably because the data no longer fits the cache.

It is also worth mentioning that `vectorize` uses a lot of memory, so memory usage is often the bottleneck (see the related SO question). Also note that numpy's documentation on `np.vectorize` states that it is "provided primarily for convenience, not for performance".

When performance is desired, other tools should be used. Besides writing a C extension from scratch, there are the following possibilities: numexpr, numba and Cython, each covered below.

numexpr

One often hears that numpy's performance is as good as it gets, because it is pure C under the hood. Yet there is a lot of room for improvement!

The vectorized numpy version uses a lot of additional memory and memory accesses. The numexpr library tries to tile the numpy arrays and thus get better cache utilization:

# less cache misses than numpy-functionality
import numexpr as ne
def ne_f(x):
    return ne.evaluate("x+2*x*x+4*x*x*x")

Leads to the following comparison:

[Performance plot: numexpr vs. numpy and np.vectorize]

I cannot explain everything in the plot above: we can see a bigger overhead for the numexpr library at the beginning, but because it utilizes the cache better, it is about 10 times faster for bigger arrays!


numba

Another approach is to jit-compile the function and thus get a real pure-C ufunc. This is numba's approach:

# runtime generated C-function as ufunc
import numba as nb
@nb.vectorize(target="cpu")
def nb_vf(x):
    return x+2*x*x+4*x*x*x

It is 10 times faster than the original numpy-approach:

[Performance plot: the numba-compiled ufunc is about 10x faster than the numpy version]


However, the task is embarrassingly parallel, so we could also use `prange` to calculate the loop in parallel:

@nb.njit(parallel=True)
def nb_par_jitf(x):
    y=np.empty(x.shape)
    for i in nb.prange(len(x)):
        y[i]=x[i]+2*x[i]*x[i]+4*x[i]*x[i]*x[i]
    return y

As expected, the parallel function is slower for smaller inputs, but faster (almost a factor of 2) for larger sizes:

[Performance plot: the parallel numba version is slower for small inputs but almost 2x faster for large ones]


Cython

While numba specializes in optimizing operations with numpy arrays, Cython is a more general tool. Extracting the same performance as with numba is more complicated - often it comes down to llvm (numba) vs. the local compiler (gcc/MSVC):

%%cython -c=/openmp -a
import numpy as np
import cython

#single core:
@cython.boundscheck(False) 
@cython.wraparound(False) 
def cy_f(double[::1] x):
    y_out=np.empty(len(x))
    cdef Py_ssize_t i
    cdef double[::1] y=y_out
    for i in range(len(x)):
        y[i] = x[i]+2*x[i]*x[i]+4*x[i]*x[i]*x[i]
    return y_out

#parallel:
from cython.parallel import prange
@cython.boundscheck(False) 
@cython.wraparound(False)  
def cy_par_f(double[::1] x):
    y_out=np.empty(len(x))
    cdef double[::1] y=y_out
    cdef Py_ssize_t i
    cdef Py_ssize_t n = len(x)
    for i in prange(n, nogil=True):
        y[i] = x[i]+2*x[i]*x[i]+4*x[i]*x[i]*x[i]
    return y_out

Cython results in somewhat slower functions:

[Performance plot: the Cython versions are somewhat slower than their numba counterparts]


Conclusion

Obviously, testing only one function doesn't prove anything. Also, one should keep in mind that for the chosen example the memory bandwidth was the bottleneck for sizes larger than 10^5 elements - thus we had the same performance for numba, numexpr and cython in this region.

In the end, the ultimate answer depends on the type of function, hardware, Python distribution and other factors. For example, the Anaconda distribution uses Intel's VML for numpy's functions and thus easily outperforms numba (unless it uses SVML, see this SO post) for transcendental functions like exp, sin, cos and similar - see e.g. the following SO post.

Yet from this investigation and from my experience so far, I would say that numba seems to be the easiest tool with the best performance, as long as no transcendental functions are involved.


Plotting running times with the perfplot package:

import numpy as np
import perfplot
perfplot.show(
    setup=lambda n: np.random.rand(n),
    n_range=[2**k for k in range(0,24)],
    kernels=[
        f, 
        vf,
        ne_f, 
        nb_vf, nb_par_jitf,
        cy_f, cy_par_f,
        ],
    logx=True,
    logy=True,
    xlabel='len(x)'
    )
ead
  • Numba can usually make use of Intel SVML, which results in quite comparable timings to Intel VML, but the implementation is a bit buggy in versions 0.43-0.47. I have added a performance plot https://stackoverflow.com/a/56939240/4045774 for comparison to your cy_expsum. – max9111 Jan 03 '20 at 18:09
  • Best answer here if you want the *best* performance. – Abhishek Divekar Apr 10 '22 at 07:19
  • "Cython results in somewhat slower functions" - in my experience, that depends on the compiler flags. I was getting slower code until I enabled AVX2 optimization in the compiler, which outperformed Numba for my particular application. – Zaero Divide Jun 28 '22 at 21:08
squares = squarer(x)

Arithmetic operations on arrays are automatically applied elementwise, with efficient C-level loops that avoid all the interpreter overhead that would apply to a Python-level loop or comprehension.

Most of the functions you'd want to apply to a NumPy array elementwise will just work, though some may need changes. For example, `if` doesn't work elementwise. You'd want to convert such code to use constructs like `numpy.where`:

def using_if(x):
    if x < 5:
        return x
    else:
        return x**2

becomes

import numpy

def using_where(x):
    return numpy.where(x < 5, x, x**2)
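
A quick sanity check that the `numpy.where` version reproduces the scalar logic elementwise (a minimal sketch):

import numpy

x = numpy.array([1, 3, 5, 7])
using_where(x)  # array([ 1,  3, 25, 49]) - values below 5 kept, the rest squared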
user2357112

It seems that no one has mentioned numpy's built-in factory method for producing ufuncs: np.frompyfunc. I have tested it against np.vectorize, and it outperformed it by about 20-30%. Of course it will not perform as well as hand-written C code or even numba (which I have not tested), but it can be a better alternative than np.vectorize:

f = lambda x, y: x * y
f_arr = np.frompyfunc(f, 2, 1)
vf = np.vectorize(f)
arr = np.linspace(0, 1, 10000)

%timeit f_arr(arr, arr) # 307ms
%timeit vf(arr, arr) # 450ms

I have also tested larger samples, and the improvement is proportional. See also the documentation here.
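
One caveat, also noted in the comments below: `np.frompyfunc` returns arrays with `dtype=object`, so you may want to convert the result back to a numeric dtype; a minimal sketch:

import numpy as np

f_arr = np.frompyfunc(lambda x, y: x * y, 2, 1)
out = f_arr(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
out.dtype          # dtype('O') - frompyfunc produces object arrays
out.astype(float)  # convert back to a numeric dtype if needed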

Wunderbar
  • I repeated the above timing tests, and also found a performance improvement (over np.vectorize) of about 30% – Julian - BrainAnnex.org Apr 22 '20 at 04:24
  • A caveat: it seems like this method constructs arrays with dtype=object. With that said, it was still marginally faster than vectorize for me even when I added a conversion to dtype=float. – Duncan MacIntyre Jun 23 '21 at 20:02

Edit: the original answer was misleading; np.sqrt was applied directly to the whole array, just with a small overhead.

In multidimensional cases where you want to apply a builtin function that operates on a 1d array, numpy.apply_along_axis is a good choice, also for more complex function compositions from numpy and scipy.
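
For instance, applying a function that consumes a whole 1-D slice to each row of a 2-D array might look like the following sketch (the normalizing function is a made-up example):

import numpy as np

a = np.arange(12).reshape(3, 4).astype(float)

def normalize(row):
    # receives one 1-D slice at a time
    return row / row.sum()

np.apply_along_axis(normalize, 1, a)  # each row of the result sums to 1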

Previous misleading statement:

Adding the method:

def along_axis(x):
    return np.apply_along_axis(f, 0, x)

to the perfplot code gives performance results close to np.sqrt.

LyteFM
  • I am extremely shocked by the fact that most people do not seem to have been aware of this simple, scalable and built-in no-brainer for so many years... – Bill Huang Jun 04 '20 at 22:06
  • This is misleading. You are not actually vectorizing `f` this way. For example, try replacing `np.sqrt` with `math.sqrt` in Nico's perf code and you'll get an error. What is actually happening here is that `f` is called with an array argument, because x is single dimensional and you are telling it to apply it along the first axis, which contains all the elements. To make this answer valid, the argument to `apply_along_axis` should be replaced with `x[None,:]`. Then you'll find that along_axis is the slowest among them all. – syockit Aug 31 '20 at 05:30
  • You're right - I came across the question when searching for a way to apply 1d-functions to higher-dimensional arrays and tried out whether it would also work here - without realising that it simply applies `np.sqrt` directly. – LyteFM Sep 03 '20 at 13:41

I believe newer versions of numpy (I use 1.13) let you simply pass the numpy array to the function that you wrote for scalar types; it automatically applies the function to each element of the numpy array and returns another numpy array:

>>> import numpy as np
>>> squarer = lambda t: t ** 2
>>> x = np.array([1, 2, 3, 4, 5])
>>> squarer(x)
array([ 1,  4,  9, 16, 25])
Peiti Li
  • This isn't remotely new - it has always been the case - it's one of the core features of numpy. – Eric Aug 28 '17 at 00:36
  • It's the `**` operator that's applying the calculation to each element of `t`. That's ordinary numpy. Wrapping it in the `lambda` doesn't do anything extra. – hpaulj Jul 29 '18 at 17:46
  • This doesn't work with if statements as it currently is shown. – TriHard8 Feb 07 '20 at 08:45

As mentioned in this post, just use generator expressions like so:

numpy.fromiter((<some_func>(x) for x in <something>),<dtype>,<size of something>)
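
For example, filling in the template above with the squaring function from the question, and using the optional count argument so numpy can preallocate the output (a minimal sketch):

import numpy as np

x = np.array([1, 2, 3, 4, 5])
f = lambda t: t ** 2
np.fromiter((f(xi) for xi in x), x.dtype, count=len(x))
# array([ 1,  4,  9, 16, 25])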
bannana

All the above answers compare well, but if you need to use a custom function for mapping, and you have a numpy.ndarray whose shape you need to retain, here is a comparison.

I have compared just two approaches, but both retain the shape of the ndarray. I have used an array with 1 million entries for the comparison. Here I use the square function as the example; since the square is also built into numpy, it gives a great performance boost, but you can substitute a function of your choice.

import numpy, time
def timeit():
    y = numpy.arange(1000000)
    now = time.time()
    numpy.array([x * x for x in y.reshape(-1)]).reshape(y.shape)        
    print(time.time() - now)
    now = time.time()
    numpy.fromiter((x * x for x in y.reshape(-1)), y.dtype).reshape(y.shape)
    print(time.time() - now)
    now = time.time()
    numpy.square(y)  
    print(time.time() - now)

Output

>>> timeit()
1.162431240081787    # list comprehension and then building numpy array
1.0775556564331055   # from numpy.fromiter
0.002948284149169922 # using inbuilt function

Here you can clearly see that numpy.fromiter works well compared to the simple approach, and if a built-in function is available, please use that.

Rushikesh

Use numpy.fromfunction(function, shape, **kwargs)

See "https://docs.scipy.org/doc/numpy/reference/generated/numpy.fromfunction.html"

Eric Cox