
I have a list of complex numbers for which I want to find the closest value in another list of complex numbers.

My current approach with numpy:

import numpy as np

refArray = np.random.random(16)
myArray = np.random.random(1000)


def find_nearest(array, value):
    idx = (np.abs(array - value)).argmin()
    return idx

for value in np.nditer(myArray):
    index = find_nearest(refArray, value)
    print(index)

Unfortunately, this takes ages for a large number of values. Is there a faster or more "pythonic" way of matching each value in myArray to the closest value in refArray?

FYI: I don't necessarily need numpy in my script.

Important: the order of both myArray as well as refArray is important and should not be changed. If sorting is to be applied, the original index should be retained in some way.

Alexander
  • In terms of time complexity, a *sliding window* will probably be the most efficient. – Willem Van Onsem Jul 27 '17 at 11:33
  • I cannot see a sliding window being any more efficient than the current solution. To the best of my understanding, the current solution is O(n), which is the best to hope for. Then there are some trade-offs to make in order to minimize the time constant. But that depends on whether your big case is exploding your memory or not. If it is not a memory issue, it might be possible to gain a little from using broadcasting and more `numpy` calculations, but that might just as well slow you down if RAM is an issue. – JohanL Jul 27 '17 at 11:39
  • @JohanL RAM is not an issue, time unfortunately is. This simple loop was both the simplest and the best approach I could think of. Unfortunately, with array sizes of ref=64 and values=200,000 the matching takes over 10 seconds; the target would be well under one second... – Alexander Jul 27 '17 at 11:52
  • You could try `scipy.spatial.KDTree.query`. – hilberts_drinking_problem Jul 27 '17 at 11:55
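As a sketch of the KDTree suggestion above (assuming SciPy is available; `cKDTree.query` returns both distances and indices into the reference array):

```python
import numpy as np
from scipy.spatial import cKDTree

refArray = np.random.random(16)
myArray = np.random.random(1000)

# cKDTree expects 2-D points, so treat each scalar as a 1-D point.
tree = cKDTree(refArray[:, None])
# idx[i] is the index into refArray of the value closest to myArray[i]
dist, idx = tree.query(myArray[:, None])
```

For the complex case the same idea should work by stacking real and imaginary parts into (n, 2) points, e.g. `np.column_stack([z.real, z.imag])`, since closeness by `abs()` of a complex difference is Euclidean distance in the plane.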

2 Answers


Here's one vectorized approach with np.searchsorted based on this post -

def closest_argmin(A, B):
    L = B.size
    sidx_B = B.argsort()
    sorted_B = B[sidx_B]
    sorted_idx = np.searchsorted(sorted_B, A)
    sorted_idx[sorted_idx == L] = L - 1
    mask = (sorted_idx > 0) & \
           (np.abs(A - sorted_B[sorted_idx - 1]) < np.abs(A - sorted_B[sorted_idx]))
    return sidx_B[sorted_idx - mask]

Brief explanation:

  • Get the insertion positions of A into the sorted version of B with np.searchsorted(sorted_B, A) (the default is side='left'). Since searchsorted expects its first argument to be sorted, B is sorted first and the sorting indices sidx_B are kept, so the result can be mapped back to B's original order.

  • For each value, compare the candidate just below the insertion position (sorted_idx - 1) against the one at the insertion position (sorted_idx) and see which is closer. This is the step that computes mask.

  • Depending on which candidate is closer, pick the corresponding index. This is the subtraction sorted_idx - mask, with the boolean mask acting as an integer offset (0 or 1), mapped back through sidx_B.
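To make these steps concrete, here is a tiny worked example (the function is repeated so the snippet runs on its own):

```python
import numpy as np

def closest_argmin(A, B):
    L = B.size
    sidx_B = B.argsort()
    sorted_B = B[sidx_B]
    sorted_idx = np.searchsorted(sorted_B, A)
    sorted_idx[sorted_idx == L] = L - 1
    mask = (sorted_idx > 0) & \
           (np.abs(A - sorted_B[sorted_idx - 1]) < np.abs(A - sorted_B[sorted_idx]))
    return sidx_B[sorted_idx - mask]

A = np.array([0.15, 0.80, 0.42])
B = np.array([0.5, 0.1, 0.9])  # deliberately unsorted
print(closest_argmin(A, B))    # -> [1 2 0]
```

0.15 is closest to B[1] = 0.1, 0.80 to B[2] = 0.9, and 0.42 to B[0] = 0.5 — note the returned indices refer to B's original, unsorted order.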

Benchmarking

Original approach -

def org_app(myArray, refArray):
    out1 = np.empty(myArray.size, dtype=int)
    for i, value in enumerate(myArray):
        # find_nearest from posted question
        index = find_nearest(refArray, value)
        out1[i] = index
    return out1

Timings and verification -

In [188]: refArray = np.random.random(16)
     ...: myArray = np.random.random(1000)
     ...: 

In [189]: %timeit org_app(myArray, refArray)
100 loops, best of 3: 1.95 ms per loop

In [190]: %timeit closest_argmin(myArray, refArray)
10000 loops, best of 3: 36.6 µs per loop

In [191]: np.allclose(closest_argmin(myArray, refArray), org_app(myArray, refArray))
Out[191]: True

50x+ speedup for the posted sample and hopefully more for larger datasets!

Divakar

A much shorter answer than @Divakar's, also using broadcasting, and even slightly faster:

abs(myArray[:, None] - refArray[None, :]).argmin(axis=-1)
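A self-contained demo of this one-liner. One caveat: broadcasting materializes the full len(myArray) × len(refArray) distance matrix, so memory grows with the product of the two sizes, whereas the searchsorted approach stays roughly linear:

```python
import numpy as np

refArray = np.random.random(16)
myArray = np.random.random(1000)

# Broadcast to a (1000, 16) matrix of distances, then take the
# per-row argmin to get one refArray index per myArray value.
idx = np.abs(myArray[:, None] - refArray[None, :]).argmin(axis=-1)
print(idx.shape)  # (1000,)
```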
Make42