
I made a few experiments and found a number of cases where Python's standard random and math libraries are faster than their NumPy counterparts.

My impression is that the standard library is roughly 10x faster for small-scale (scalar) operations, while NumPy is much faster for large-scale (vector) operations. My guess is that NumPy has some per-call overhead which becomes dominant for small cases.

My question is: is my intuition correct? And is it generally advisable to use the standard library rather than NumPy for small (typically scalar) operations?

Examples are below.

import math
import random
import numpy as np

Log and exponential

%timeit math.log(10)
# 158 ns ± 6.16 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)

%timeit np.log(10)
# 1.64 µs ± 93.2 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)

%timeit math.exp(3)
# 146 ns ± 8.57 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)

%timeit np.exp(3)
# 1.72 µs ± 78.5 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)

Generate normal distribution

%timeit random.gauss(0, 1)
# 809 ns ± 12.8 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)

%timeit np.random.normal()
# 2.57 µs ± 14.1 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

Choosing a random element

%timeit random.choices([1,2,3], k=1)
# 1.56 µs ± 55.9 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)

%timeit np.random.choice([1,2,3], size=1)
# 23.1 µs ± 1.04 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)

Same with numpy array

arr = np.array([1,2,3])

%timeit random.choices(arr, k=1)
# 1.72 µs ± 33.2 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)

%timeit np.random.choice(arr, size=1)
# 18.4 µs ± 502 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

With big array

arr = np.arange(10000)

%timeit random.choices(arr, k=1000)
# 401 µs ± 6.16 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

%timeit np.random.choice(arr, size=1000)
# 41.7 µs ± 1.39 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
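
The same pattern seems to hold for the math functions when they are applied element-wise to a large array. A sketch of what I mean (xs is just an illustrative array; I have not timed these as carefully as the examples above):

import math
import numpy as np

xs = np.random.rand(1_000_000)

%timeit [math.log(x) for x in xs]  # Python-level loop: one function call per element
%timeit np.log(xs)                 # single vectorized call; far faster at this size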
Kota Mori
    Looks like you answered your own question... – Julien Oct 02 '18 at 07:15
    @Julien superficially, but maybe a numpy expert can chime in and connect some dots. – timgeb Oct 02 '18 at 07:17
  • @Julien Well, I wanted to know whether my experiments are generalizable. – Kota Mori Oct 02 '18 at 07:18
    I'm no expert but yes there is some overhead using numpy: native to numpy object (c++) conversion / extra safety checks / rules and so on... if you have small arrays it is usually not faster to use numpy, where exactly is the tipping point very much depends on what exactly you are doing. Timing testing like you did is typically the only way to figure out whether you should use numpy or stay native (if performance is your only concern, numpy functionalities can be a bonus irrespective of performance) ... – Julien Oct 02 '18 at 07:22
  • Anyhow, I had made the same observation on my side and I wanted some confirmation. So thank you for the question! – Marine Galantin Apr 26 '20 at 15:14

3 Answers


numpy is only really a performance improvement for large blocks of data. The overhead of making sure the memory blocks line up correctly before pouring an ndarray into a C-compiled numpy function will generally overwhelm any time benefit if the array isn't relatively large. This is why so many numpy questions are basically "How do I make this loopy code fast?", and why it is considered a valid question in this tag, where nearly any other tag would toss you to Code Review before getting past the title.

So, yes, your observation is generalizable. Vectorizing is the whole point of numpy. numpy code that isn't vectorized is always slower than bare python code, and is arguably just as "wrong" as cracking a single walnut with a jackhammer. Either find the right tool or get more nuts.
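
As a rough sketch of what vectorizing buys you (the names here are illustrative, not from the question), the same reduction written as a Python-level loop and as a single NumPy expression behave very differently as the array grows:

import numpy as np

data = np.random.rand(1_000_000)

def loopy_norm(xs):
    # pays Python interpreter overhead on every element
    total = 0.0
    for x in xs:
        total += x * x
    return total ** 0.5

def vectorized_norm(xs):
    # one pass through C-compiled numpy code
    return np.sqrt((xs * xs).sum())

%timeit loopy_norm(data)       # slow: per-element overhead dominates
%timeit vectorized_norm(data)  # much faster at this size; for a handful of elements the loop can win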

Daniel F

NumPy is used primarily for performance with arrays. This relies on the use of contiguous memory blocks and more efficient lower-level iteration. Applying a NumPy mathematical function on a scalar or calculating a random number are not vectorisable operations. This explains the behaviour you are seeing.

See also What are the advantages of NumPy over regular Python lists?

And will it be in general advisable to use the standard library rather than NumPy for small (typically scalar) operations?

It's rare that the bottleneck for a program is caused by operations on scalars. In practice, the differences are negligible. So either way is fine. If you are already using NumPy there's no harm in continuing to use NumPy operations on scalars.

Random number generation is worth treating as a special case. As you might expect, the numbers drawn via random and via NumPy will not, in general, be the same:

assert random.gauss(0, 1) == np.random.normal()  # AssertionError
assert random.choices(arr, k=1)[0] == np.random.choice(arr, size=1)[0]  # AssertionError

NumPy has additional functionality to make random numbers "predictable". For example, running the script below repeatedly will always produce the same result:

np.random.seed(0)
np.random.normal()

The same applies to np.random.choice. So there are differences both in how the random numbers are derived and in the functionality available. For testing or other purposes you may wish to be able to produce consistent "random" numbers.
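
A minimal sketch of seeding both generators (both libraries support seeding, but they maintain independent state, so seeding one does not affect the other):

import random
import numpy as np

random.seed(0)
np.random.seed(0)

print(random.gauss(0, 1))   # same value on every run of this script
print(np.random.normal())   # also reproducible, but unrelated to the value above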

jpp
  • Thank you. I agree that such situations, where scalar operations are the critical bottleneck, are rare. I happened to hit one because I needed to write a tree-based algorithm that makes a small calculation at each node while traversing down the tree, where vectorization is not straightforward or even possible. When I switched from numpy to the Python standard library, the entire algorithm ran a few times faster. I also note that `random.seed` exists in the standard library as well, so this is not special to numpy. – Kota Mori Oct 08 '18 at 02:42
    @KotaMori, You should consider `numba`. If you can put your loop inside a JIT-compiled function, you may see large performance improvements. – jpp Oct 08 '18 at 08:24
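
A minimal sketch of the numba suggestion from the comment above (traverse_sum and its body are illustrative placeholders for the per-node calculation, not code from the question):

from numba import njit
import numpy as np

@njit
def traverse_sum(values):
    # plain Python-style loop, compiled to machine code on first call,
    # so the per-iteration interpreter overhead disappears
    total = 0.0
    for v in values:
        total += v * v
    return total

arr = np.arange(10000, dtype=np.float64)
traverse_sum(arr)          # first call includes JIT compilation time
%timeit traverse_sum(arr)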

If we measure the execution time of building a Pascal's triangle for a given n in Python with two different implementations, one using a plain Python for loop and one using NumPy array addition, and then plot the time required for inputs n = 2^i, the NumPy version comes out well ahead once n grows large (the plot is in the linked post).

Source: https://algorithmdotcpp.blogspot.com/2022/01/prove-numpy-is-faster-than-normal-list.html
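
A sketch of the two implementations being compared (assumed for illustration, not copied from the linked post); each builds the first n rows of Pascal's triangle:

import numpy as np

def pascal_loop(n):
    # pure-Python version: build each row from the previous one
    rows = [[1]]
    for _ in range(n - 1):
        prev = rows[-1]
        rows.append([1] + [prev[i] + prev[i + 1] for i in range(len(prev) - 1)] + [1])
    return rows

def pascal_numpy(n):
    # NumPy version: a shifted array addition produces the next row
    # (fixed-width integers will overflow for very large n)
    rows = [np.array([1])]
    for _ in range(n - 1):
        padded = np.concatenate(([0], rows[-1], [0]))
        rows.append(padded[:-1] + padded[1:])
    return rows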

MB-F