
I have a numpy array as follows:

array([[ 6,  5],
       [ 6,  9],
       [ 7,  5],
       [ 7,  9],
       [ 8, 10],
       [ 9, 10],
       [ 9, 11],
       [10, 10]])

I want to pick elements such that the y coordinates are unique. If two y coordinates are the same, I want to pick the element with the lesser x coordinate.

Expected output

array([[ 6,  5],
       [ 6,  9],
       [ 8, 10],
       [ 9, 11]])

Explanation

pick [6,5] over [7,5]

pick [8,10] over [9,10] and [10,10]

pick [9, 11]

Thanks

user009122
3 Answers


First, sort by the first column:

a = a[a[:, 0].argsort()]

Then keep the first occurrence of each unique y using np.unique with the return_index flag; since the rows are now sorted by x, the first occurrence of each y is the row with the smallest x:

a[np.unique(a[:, 1], return_index=True)[1]]

array([[ 6,  5],
       [ 6,  9],
       [ 8, 10],
       [ 9, 11]])
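
The timings below call these two steps wrapped into a single function, argsort_unique; the post doesn't show its definition, so this is a minimal sketch reconstructed from the steps above:

import numpy as np

def argsort_unique(a):
    # Sort rows by x so that, for each y, the smallest-x row comes first,
    # then keep the first occurrence of each unique y value.
    a = a[a[:, 0].argsort()]
    return a[np.unique(a[:, 1], return_index=True)[1]]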

Some timings:

a = np.random.randint(1, 10, 10000).reshape(-1, 2)

In [45]: %timeit rows_by_unique_y(a)
3.83 ms ± 137 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [46]: %timeit argsort_unique(a)
370 µs ± 8.26 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

Yes, my approach uses an initial sort, but vectorized operations in numpy beat iteration in Python.

user3483203
  • Sorting is asymptotically suboptimal for this problem. O(n log n) vs O(n) – Alex Reinking Jul 08 '18 at 23:06
  • Why don't you time both of our approaches? Looping when using numpy is bad. – user3483203 Jul 08 '18 at 23:07
  • +1, cheers. Yours indeed has a much lower constant! I often underestimate just how slow Python interpreters are. – Alex Reinking Jul 08 '18 at 23:19
  • @AlexReinking yeah, it's a counter-intuitive thing when working with numpy. In answers like [this](https://stackoverflow.com/questions/50959180/python-sort-list-of-lists-numerically/50960136#50960136), I actually sort something twice when dropping duplicates, and it still easily beats any vanilla Python approach. – user3483203 Jul 08 '18 at 23:21
  • Playing with a few input array sizes, I can actually see the linear vs n-log-n scaling, but the crossing point is around 25 million rows. – Alex Reinking Jul 08 '18 at 23:31

If you are open to using another library, I would suggest numpy_indexed for an efficient and compact solution:

import numpy as np
import numpy_indexed as npi

a = np.array([[6, 5], [6, 9], [7, 5], [7, 9], [8, 10], [9, 10], [9, 11], [10, 10]])

column_to_groupby = 1
groups, reduced = npi.group_by(a[:,column_to_groupby]).min(a)
print(reduced)

It gives the following output. (The min is taken columnwise within each group; since y is constant within a group, each row of reduced comes out as [min x, y].)

[[ 6  5]
 [ 6  9]
 [ 8 10]
 [ 9 11]]
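
As a side note (my addition, relying on numpy_indexed grouping by sorted keys), the groups value returned alongside reduced holds the unique y key for each output row:

print(groups)  # [ 5  9 10 11], one unique y per row of reduced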

Here is the timeit result:

In [5]: a = np.random.randint(1, 10, 10000).reshape(-1, 2)

In [6]: %timeit npi.group_by(a[:,1]).min(a)
354 µs ± 2.29 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
lakshayg

One approach is to loop through the array and make a note of the best value you've seen for each y, then reconstruct the array at the end:

import numpy as np
from collections import defaultdict

def rows_by_unique_y(arr):
    # Track the smallest x seen so far for each y.
    best_for_y = defaultdict(lambda: float('inf'))
    for row in arr:
        x, y = row[0], row[1]
        best_for_y[y] = min(x, best_for_y[y])
    return np.array([[x, y] for y, x in best_for_y.items()])

arr = np.array([[6,  5], [6,  9], [7,  5], [7,  9], [8, 10], [9, 10], [9, 11], [10, 10]])
print(rows_by_unique_y(arr))

No need to sort; just keep track of the minimums. This outputs:

[[ 6  5]
 [ 6  9]
 [ 8 10]
 [ 9 11]]

While this answer is asymptotically faster, user3483203's answer is much better in practice. That's because it calls out to optimized C code rather than staying inside Python's surprisingly slow interpreter. However, if your arrays are huge (several gigabytes), the O(n log n) behavior will start to lose to this O(n) approach.

At the same time, if your arrays are that large, you should probably be using a MapReduce framework like Spark instead. The algorithm I gave above is easily parallelized.
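
To make that last point concrete, here is one possible shape of the parallel version; this is a sketch only, and the chunked map/reduce structure, helper names, and chunk count are my own assumptions rather than anything from the answer above:

import numpy as np

def chunk_minima(chunk):
    # Map step: smallest x seen for each y within this chunk.
    best = {}
    for x, y in chunk:
        best[y] = min(x, best.get(y, float('inf')))
    return best

def rows_by_unique_y_parallel(arr, n_chunks=4):
    # Reduce step: merge the per-chunk {y: min_x} maps by taking min again.
    # In Spark, the same idea would be a reduceByKey(min) over (y, x) pairs.
    best_for_y = {}
    for part in map(chunk_minima, np.array_split(arr, n_chunks)):
        for y, x in part.items():
            best_for_y[y] = min(x, best_for_y.get(y, float('inf')))
    return np.array([[x, y] for y, x in best_for_y.items()])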


If you don't need the minimum x values, then the following one-liner using np.unique works:

arr[np.unique(arr[:,1], return_index=True)[1]]

but this returns

array([[ 6,  5],
       [ 6,  9],
       [10, 10],
       [ 9, 11]])

if you switch the rows [8, 10] and [10, 10] in the input, since np.unique's return_index keeps the first occurrence of each y rather than the one with the smallest x.

Alex Reinking
  • Thank you, Alex. I was trying to avoid looping; I had tried using something like np.unique(d[:,1]) and playing around with it. – user009122 Jul 08 '18 at 23:01
  • It's going to be difficult to square the behavior of `np.unique` with your need to keep the minimum `x` value. If you find that you need to sort, I would thoroughly test that solution against this one, since sorting makes the algorithm O(n log n) while my code runs in O(n) (amortized, due to the hash table). – Alex Reinking Jul 08 '18 at 23:02
  • @user009122 - I added how you would use `np.unique` if you didn't have the x-value constraint. – Alex Reinking Jul 08 '18 at 23:05
  • You are right about the complexity. For my use case I was looking for concise code, and I am not too bothered about complexity. – user009122 Jul 08 '18 at 23:09