9

I have a collection of n-dimensional points and I want to find which 2 are the closest. The best I could come up with for 2 dimensions is:

from numpy import *
myArr = array( [[1, 2],
                [3, 4],
                [5, 6],
                [7, 8]] )

n = myArr.shape[0]
cross = [[sum( ( myArr[i] - myArr[j] ) ** 2 ), i, j]
         for i in xrange( n )
         for j in xrange( n )
         if i != j
         ]

print min( cross )

which gives

[8, 0, 1]

But this is too slow for large arrays. What kind of optimisation can I apply to it?

RELATED:


Euclidean distance between points in two different Numpy arrays, not within

Ηλίας
  • @Ηλίας: Roughly how many points do you have? Please note that it's possible for a set of more than 2 points (even all the points) to share the same minimal distance (but inaccurate computations may not reflect this, so eventually you need to be able to set a threshold trh below which distance differences are considered equal). Aren't you interested in finding the closest point to a given one? – eat Feb 25 '11 at 20:31
  • @eat It is a hierarchical clustering that I am building, and I need to find the two closest centroids. Normally fewer than a thousand points, but I need to see how far it can scale. Rounding errors won't be that important in my case. – Ηλίας Feb 25 '11 at 20:57

7 Answers

11

Try scipy.spatial.distance.pdist(myArr). It returns a condensed distance matrix: a flat array containing each pairwise distance exactly once. Call argmin on it to find the index of the smallest value, then convert that index back into the (i, j) pair.
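
A minimal sketch of that recipe (it round-trips through squareform for simplicity; note that pdist returns plain Euclidean distances by default, i.e. the square root of the squared distance the question prints):

import numpy as np
from scipy.spatial.distance import pdist, squareform

myArr = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])

d = pdist(myArr)               # condensed form: n*(n-1)/2 pairwise distances
sq = squareform(d)             # expand to the full n x n symmetric matrix
np.fill_diagonal(sq, np.inf)   # mask the zero self-distances
i, j = np.unravel_index(sq.argmin(), sq.shape)
print(i, j, sq[i, j])          # -> 0 1 2.828... (the sqrt of the question's 8)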

tkerwin
  • What is the easiest way to get those coordinates from that single integer? – Ηλίας Feb 25 '11 at 18:58
  • @Ηλίας You can use `np.unravel_index(np.argmin(distances), distances.shape)` if *distances* contains the square form (`squareform`) of the *pdist* result above. – sffc Jun 19 '15 at 00:53
  • It gives me a stomachache to use this method for finding closest pairs in O(N^2) time, because the divide-and-conquer O(N log N) solution was literally the first algorithm I learned in my algorithms class in school. But this is just so much easier to implement and it works fine for a small enough set. – sffc Jun 19 '15 at 00:56
9

There's a whole Wikipedia page on just this problem, see: http://en.wikipedia.org/wiki/Closest_pair_of_points

Executive summary: you can achieve O(n log n) with a recursive divide-and-conquer algorithm (outlined on the Wikipedia page above).
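
For the curious, here is a compact 2-D sketch of that divide-and-conquer scheme. It is a sketch rather than a tuned implementation: re-sorting the strip by y at every level makes it O(n log^2 n), and the textbook version merges pre-sorted halves to reach the full O(n log n). It assumes at least two input points.

import math

def closest_pair(points):
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def rec(p):
        n = len(p)
        if n <= 3:  # brute-force the tiny base cases
            return min((dist(a, b), a, b)
                       for k, a in enumerate(p) for b in p[k + 1:])
        mid = n // 2
        midx = p[mid][0]
        best = min(rec(p[:mid]), rec(p[mid:]))
        # only points within the current best distance of the split line
        # can possibly improve on it
        strip = sorted((q for q in p if abs(q[0] - midx) < best[0]),
                       key=lambda q: q[1])
        for k, a in enumerate(strip):
            for b in strip[k + 1:k + 8]:  # at most 7 candidates per point
                if b[1] - a[1] >= best[0]:
                    break
                best = min(best, (dist(a, b), a, b))
        return best

    return rec(sorted(points))  # pre-sort by x once

print(closest_pair([(1, 2), (3, 4), (5, 6), (7, 8)]))
# -> (2.8284271247461903, (1, 2), (3, 4))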

payne
  • Neat! I'm glad I hit refresh before writing: "Obviously the complexity is O(n^2)" ;o) – das_weezul Feb 25 '11 at 16:27
  • Great. If the points are to be added successively, and the minimum distance pair is to be updated, then maintaining a Delaunay triangulation structure is efficient. – Alexandre C. Feb 25 '11 at 16:28
6

You could take advantage of the latest version of SciPy's (v0.9) Delaunay triangulation tools. You can be sure that the closest two points will be an edge of a simplex in the triangulation, which is a much smaller subset of pairs than doing every combination.

Here's the code (updated for general N-D):

import itertools
import numpy
from scipy import spatial

def closest_pts(pts):
    # set up the triangulation; let Delaunay do the heavy lifting
    mesh = spatial.Delaunay(pts)

    # gather every edge of every simplex -- the closest pair is guaranteed
    # to be one of them (duplicates remain; TODO: eliminate redundant
    # edges, numpy.unique?); mesh.simplices was mesh.vertices in SciPy 0.9
    nvert = mesh.simplices.shape[1]
    edges = numpy.vstack([mesh.simplices[:, [a, b]]
                          for a, b in itertools.combinations(range(nvert), 2)])

    # the rest is easy
    x = mesh.points[edges[:, 0]]
    y = mesh.points[edges[:, 1]]

    dists = numpy.sum((x - y) ** 2, 1)
    idx = numpy.argmin(dists)

    return edges[idx]  # indices of the two closest points

dim = 3
N = 1000 * dim
pts = numpy.random.random(N).reshape(N // dim, dim)
print(closest_pts(pts))

Scaling seems close to O(n):

[timing plot: runtime vs. number of points, roughly linear]

Paul
  • May actually work in 2D. Have you made any timings? However, this approach fails miserably in higher dimensions. Thanks – eat Feb 25 '11 at 21:31
  • @eat: why do you say it "fails miserably"? 3D is 4-5X slower than the same N in 2D. But any approach (except for the naive brute approach) is going to see slowdowns with D. – Paul Feb 25 '11 at 21:56
  • Well, it's kind of pointless to try to do Delaunay triangulation in 123D! So this won't solve OP's question (unless his nD is 2 or 3). Don't get me wrong, I'm actually very happy that `scipy` is able to perform Delaunay triangulation so fast. Please make some timings with `pdist` for n= 2...123, you'll see. Thanks – eat Feb 25 '11 at 22:16
  • @eat: I missed the fact that the OP wanted a general N-D solution, I was under the impression it was strictly 2D. I'm a little "bridge-and-tunnel" and sometimes consider 3D not only as "high dimensional", but the highest! – Paul Feb 26 '11 at 00:59
2

There is a scipy function pdist that will get you the pairwise distances between points in an array in a fairly efficient manner:

http://docs.scipy.org/doc/scipy/reference/spatial.distance.html

It outputs the N*(N-1)/2 unique pairwise distances (since r_ij == r_ji). You can then search for the minimum value and avoid the whole loop mess in your code.
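
If expanding back to the square matrix is too memory-hungry, the condensed index can also be inverted arithmetically. A sketch (condensed_to_pair is an illustrative helper, not part of SciPy; the formula follows from the row-major layout of the condensed matrix):

import numpy as np
from scipy.spatial.distance import pdist

def condensed_to_pair(k, n):
    # map position k of the condensed pdist output back to (i, j), i < j
    i = int(n - 2 - np.floor(np.sqrt(-8 * k + 4 * n * (n - 1) - 7) / 2 - 0.5))
    j = int(k + i + 1 - n * (n - 1) // 2 + (n - i) * (n - i - 1) // 2)
    return i, j

myArr = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])
d = pdist(myArr)
print(condensed_to_pair(d.argmin(), len(myArr)))  # -> (0, 1)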

JoshAdel
1

Perhaps you could proceed along these lines:

In []: from numpy import arange, where
In []: from numpy.random import randn
In []: from scipy.spatial.distance import pdist as pd, squareform as sf
In []: m = 1234
In []: n = 123
In []: p = randn(m, n)
In []: d = sf(pd(p))      # full square pairwise-distance matrix
In []: a = arange(m)
In []: d[a, a] = d.max()  # mask the zero diagonal
In []: where(d < d.min() + 1e-9)
Out[]: (array([701, 730]), array([730, 701]))

With substantially more points you will need to somehow exploit the hierarchical structure of your clustering.

eat
0

The accepted answer is OK for small datasets, but its execution time scales as n**2. However, as pointed out by @payne, an optimal solution can achieve n*log(n) computation time scaling.

This optimal solution can be obtained using sklearn.neighbors.BallTree as follows.

import matplotlib.pyplot as plt
import numpy as np
from sklearn.neighbors import BallTree as tree

n = 10
dim = 2
xy = np.random.uniform(size=[n, dim])

# This solution is optimal when xy is very large
res = tree(xy)
dist, ids = res.query(xy, 2)  # two nearest neighbours of each point
mindist = dist[:, 1]          # the first neighbour is the point itself
minid = np.argmin(mindist)

plt.plot(*xy.T, 'o')
plt.plot(*xy[ids[minid]].T, '-o')
plt.show()

This procedure scales well for very large sets of xy values and even for large dimensions dim (although the example illustrates the case dim=2). The resulting output looks like this:

[plot: the nearest pair of points is connected by an orange line]

An identical solution can be obtained using scipy.spatial.cKDTree, by replacing the sklearn import with the SciPy one below. Note, however, that cKDTree, unlike BallTree, does not scale well to high dimensions.

from scipy.spatial import cKDTree as tree
divenex
0

How fast is it compared to just doing a nested loop and keeping track of the shortest pair? I think creating that huge cross list is what might be hurting you. Even O(n^2) is still pretty quick if you're only working with 2-dimensional points.
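
For what it's worth, a sketch of that running-minimum loop (closest_pair_bruteforce is a hypothetical helper name; pure Python, so the constant factor is large compared with the vectorized options above, but no n**2-sized list is built):

import math

def closest_pair_bruteforce(pts):
    # plain O(n^2) scan that keeps only the current best squared distance
    best = (math.inf, None, None)
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            d = sum((a - b) ** 2 for a, b in zip(pts[i], pts[j]))
            if d < best[0]:
                best = (d, i, j)
    return best

print(closest_pair_bruteforce([[1, 2], [3, 4], [5, 6], [7, 8]]))  # (8, 0, 1)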

Anthony -GISCOE-