I need a way to sample without replacement from an array a. I tried two approaches (see the MCVE below): random.sample() and np.random.choice().

I assumed the numpy function would be faster, but it turns out it is not. In my tests random.sample is ~15% faster than np.random.choice.

Is this correct, or am I doing something wrong in my example below? If this is correct, why?

import numpy as np
import random
import time
from contextlib import contextmanager


@contextmanager
def timeblock(label):
    start = time.perf_counter()  # time.clock() was removed in Python 3.8
    try:
        yield
    finally:
        end = time.perf_counter()
        print('{} elapsed: {}'.format(label, end - start))


def f1(a, n_sample):
    return random.sample(range(len(a)), n_sample)


def f2(a, n_sample):
    return np.random.choice(len(a), n_sample, replace=False)


# Generate random array
a = np.random.uniform(1., 100., 10000)
# Number of samples' indexes to randomly take from a
n_sample = 100
# Number of times to repeat functions f1 and f2
N = 100000

with timeblock("random.sample"):
    for _ in range(N):
        f1(a, n_sample)

with timeblock("np.random.choice"):
    for _ in range(N):
        f2(a, n_sample)
    https://github.com/numpy/numpy/issues/2764 – ayhan Dec 01 '16 at 15:50
  • I see, it's a long standing issue. Could you make an answer out of your comment please @ayhan, so I can mark it as accepted? – Gabriel Dec 01 '16 at 15:52
  • 3
    I think the problem is that `np.random.choice` does random sampling without replacement by generating a permutation of *all* of the indices in your array, then taking the first `n_sample` of these (see [this line](https://github.com/numpy/numpy/blob/8c49b92d9b472f1b476b360951b1ac9066f69b4b/numpy/random/mtrand/mtrand.pyx#L1174)). This becomes quite inefficient if `n_sample` is much smaller than the number of elements in your array `a`. – ali_m Dec 01 '16 at 20:41
  • 3
    On the other hand, `random.sample` only ever draws `n_samples` random samples. It does this in one of two ways - either by keeping track of the items it has selected already (if `n_samples << N`) or by maintaining a shrinking pool of candidate items that can be selected (if `n_samples` is relatively large compared to `N`). You can see the source code [here](https://hg.python.org/cpython/file/tip/Lib/random.py#l337) - it's quite simple and readable. – ali_m Dec 01 '16 at 20:52
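    The two strategies described in the comment above can be sketched in plain Python. This is a simplified illustration, not the actual CPython implementation, and the function names are mine:

    ```python
    import random

    def sample_small_k(population, k, rnd=random.random):
        """Sketch of the strategy for k << n: draw random indices
        and keep a set of those already selected to avoid repeats."""
        n = len(population)
        selected = set()
        result = []
        while len(result) < k:
            i = int(rnd() * n)
            if i not in selected:
                selected.add(i)
                result.append(population[i])
        return result

    def sample_large_k(population, k, rnd=random.random):
        """Sketch of the pool-shrinking strategy for larger k:
        pick from a shrinking pool, filling each hole with the
        last not-yet-picked element."""
        pool = list(population)
        n = len(pool)
        result = []
        for i in range(k):
            j = int(rnd() * (n - i))        # index into the unpicked region
            result.append(pool[j])
            pool[j] = pool[n - i - 1]       # move last unpicked item into the hole
        return result
    ```

    Either way, only O(k) random draws are needed, which is why random.sample stays fast when k is much smaller than the array size.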

1 Answer

TL;DR Since numpy v1.17.0 it is recommended to use a numpy.random.default_rng() generator object instead of the module-level numpy.random functions. For choice:

import numpy as np

rng = np.random.default_rng()    # optionally pass a seed
rng.choice(...)                  # same interface as before

Among the other changes to the random API introduced in v1.17, this new version of choice is much smarter and should be the fastest option in most cases. The old version is kept unchanged for backward compatibility.


As mentioned in the comments, there was a long-standing issue in numpy: the np.random.choice implementation was inefficient for k << n compared to random.sample from the Python standard library.

The problem was that np.random.choice(arr, size=k, replace=False) was implemented as permutation(arr)[:k]. For a large array and a small k, computing a permutation of the whole array is a waste of time and memory. The standard library's random.sample works in a more straightforward way: it iteratively samples without replacement, either keeping track of what has already been sampled or of what is left to sample from.
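The asymmetry can be made concrete with a small sketch. The permutation-then-slice line mimics what the pre-1.17 implementation effectively computed; it is an illustration, not the actual numpy source:

```python
import numpy as np
import random

n, k = 1_000_000, 10

# What the old np.random.choice(n, k, replace=False) effectively did:
# shuffle all n indices, then keep only the first k (O(n) work).
idx_old_style = np.random.permutation(n)[:k]

# What random.sample does: draw just k distinct items (O(k) work).
idx_sample = random.sample(range(n), k)
```

With n a million and k only 10, the first line does roughly 100,000 times more work than necessary.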

In v1.17.0 numpy introduced a rework and improvement of the numpy.random package (docs, what's new, performance). I highly recommend taking a look at at least the first link. Note that, as stated there, the old numpy.random API remains unchanged for backward compatibility: it keeps using the old implementations.

So the new recommended way to use the random API is through a numpy.random.default_rng() object instead of numpy.random. Note that it is an object, and it accepts an optional seed argument, so you can pass it around conveniently. By default it also uses a different bit generator that is faster on average (see the performance link above for details).

Regarding your case, you may want to use np.random.default_rng().choice(...) now. Besides being faster thanks to the improved generator, choice itself became smarter: it now falls back to a full-array permutation only when the array is sufficiently large (>10000 elements) and k is relatively large (>1/50 of the size). Otherwise it uses Floyd's sampling algorithm (short description, numpy implementation).
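Floyd's algorithm itself is short enough to sketch in Python. This is a simplified illustration for sampling k distinct integers from range(n), not numpy's actual implementation:

```python
import random

def floyd_sample(n, k, rng=random):
    """Robert Floyd's algorithm: sample k distinct integers from
    range(n) using only O(k) random draws and O(k) memory."""
    selected = set()
    for j in range(n - k, n):
        t = rng.randint(0, j)   # uniform over [0, j]
        if t in selected:
            selected.add(j)     # collision: take j itself (not yet in the set)
        else:
            selected.add(t)
    return selected
```

Each iteration adds exactly one new element, so the loop never retries, unlike naive rejection sampling.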


Here's the performance comparison on my laptop:

100 samples from array of 10000 elements x 10000 times:

random.sample elapsed: 0.8711776689742692
np.random.choice elapsed: 1.9704092079773545
np.random.default_rng().choice elapsed: 0.818919860990718

1000 samples from array of 10000 elements x 10000 times:

random.sample elapsed: 8.785315042012371
np.random.choice elapsed: 1.9777243090211414
np.random.default_rng().choice elapsed: 1.05490942299366

10000 samples from array of 10000 elements x 10000 times:

random.sample elapsed: 80.15063399000792
np.random.choice elapsed: 2.0218082449864596
np.random.default_rng().choice elapsed: 2.8596064270241186

And the code I used:

import numpy as np
import random
from timeit import default_timer as timer
from contextlib import contextmanager


@contextmanager
def timeblock(label):
    start = timer()
    try:
        yield
    finally:
        end = timer()
        print('{} elapsed: {}'.format(label, end - start))


def f1(a, n_sample):
    return random.sample(range(len(a)), n_sample)


def f2(a, n_sample):
    return np.random.choice(len(a), n_sample, replace=False)


def f3(a, n_sample):
    return np.random.default_rng().choice(len(a), n_sample, replace=False)


# Generate random array
a = np.random.uniform(1., 100., 10000)
# Number of samples' indexes to randomly take from a
n_sample = 100
# Number of times to repeat tested functions
N = 100000

print(f'{N} times {n_sample} samples')
with timeblock("random.sample"):
    for _ in range(N):
        f1(a, n_sample)

with timeblock("np.random.choice"):
    for _ in range(N):
        f2(a, n_sample)

with timeblock("np.random.default_rng().choice"):
    for _ in range(N):
        f3(a, n_sample)