['b','b','b','a','a','c','c']

numpy.unique gives

['a','b','c']

How can I get the unique values with their original order of first appearance preserved?

['b','a','c']

Great answers. Bonus question: why do none of these methods work with this dataset? http://www.uploadmb.com/dw.php?id=1364341573 Here's the follow-up question: "numpy sort wierd behavior"

siamii

7 Answers


np.unique() is slow, O(N log N), because it sorts, but you can preserve the original order with the following code:

import numpy as np

a = np.array(['b','a','b','b','d','a','a','c','c'])
# return_index gives the index of the first occurrence of each unique value;
# sorting those indices restores the original order of appearance
_, idx = np.unique(a, return_index=True)
print(a[np.sort(idx)])

output:

['b' 'a' 'd' 'c']

pandas.unique() is much faster for big arrays, O(N):

import pandas as pd

a = np.random.randint(0, 1000, 10000)
%timeit np.unique(a)
%timeit pd.unique(a)

1000 loops, best of 3: 644 us per loop
10000 loops, best of 3: 144 us per loop
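
For what it's worth, a minimal sketch (relying on the documented behavior that pd.unique returns values in order of appearance) showing that it also solves the ordering problem directly:

import numpy as np
import pandas as pd

a = np.array(['b', 'b', 'b', 'a', 'a', 'c', 'c'])
# pd.unique hashes the values and returns them in order of first appearance,
# so no extra sorting step is needed
print(pd.unique(a))   # ['b' 'a' 'c']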
HYRY
  • The `O(N)` complexity is not mentioned anywhere and is thus only an implementation detail. The documentation simply states that it is *significantly faster than `numpy.unique`*, but this may simply mean that it has smaller constants or the complexity might be between linear and NlogN. – Bakuriu Mar 26 '13 at 17:57
  • It's mentioned here: http://www.slideshare.net/fullscreen/wesm/a-look-at-pandas-design-and-development/41 – HYRY Mar 26 '13 at 22:40
  • How would you preserve the ordering with `pandas.unique()`? As far as I can tell it does not allow any parameters. – F Lekschas Nov 23 '16 at 17:02
  • @F Lekschas, pandas.unique() seems to preserve the ordering by default – themachinist Apr 12 '18 at 09:05
  • @HYRY - The link is broken, need to remove the "/fullscreen": https://www.slideshare.net/wesm/a-look-at-pandas-design-and-development/41 – Alaa M. Jan 06 '23 at 12:35

Use the return_index functionality of np.unique. That returns the indices at which the elements first occurred in the input. Then argsort those indices.

>>> u, ind = np.unique(['b','b','b','a','a','c','c'], return_index=True)
>>> u[np.argsort(ind)]
array(['b', 'a', 'c'], 
      dtype='|S1')
Fred Foo
import numpy as np

a = ['b','b','b','a','a','c','c']
# np.unique(..., return_index=True)[1] gives the first-occurrence indices;
# sorting them and indexing back into the list preserves the original order
[a[i] for i in sorted(np.unique(a, return_index=True)[1])]
YXD

If you're trying to remove duplicates from an already sorted iterable, you can use the itertools.groupby function:

>>> from itertools import groupby
>>> a = ['b','b','b','a','a','c','c']
>>> [x[0] for x in groupby(a)]
['b', 'a', 'c']

This works more like the Unix 'uniq' command, because it assumes the list is already sorted. If you try it on an unsorted list you will get something like this:

>>> b = ['b','b','b','a','a','c','c','a','a']
>>> [x[0] for x in groupby(b)]
['b', 'a', 'c', 'a']
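
If the input is not sorted and you still want just the first occurrence of each value in its original order, a plain-Python sketch using a seen set (not part of this answer, just a common idiom) would be:

>>> b = ['b','b','b','a','a','c','c','a','a']
>>> seen = set()
>>> [x for x in b if not (x in seen or seen.add(x))]
['b', 'a', 'c']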
Jan Spurny
  • Almost all of the time `numpy` problems get solved way faster using `numpy`, pure python solutions will be slow since `numpy` is specialised. – jamylak Mar 26 '13 at 13:09
# List we need to remove duplicates from while preserving order
x = ['key1', 'key3', 'key3', 'key2']

# Dictionary keys are unique and, since Python 3.7, a plain dict preserves insertion order
thisdict = dict.fromkeys(x)

# Convert back to a list
print(list(thisdict))

output: ['key1', 'key3', 'key2']

If you want to remove consecutive repeated entries from a numeric array, like the Unix tool uniq, this is a solution:

import numpy as np

def uniq(seq):
    """
    Like the Unix tool uniq: removes consecutive repeated entries.
    Relies on subtraction, so it only works on numeric arrays.
    :param seq: numpy.array
    :return: numpy.array
    """
    diffs = np.ones_like(seq)
    diffs[1:] = seq[1:] - seq[:-1]   # zero wherever an entry equals its predecessor
    idx = diffs.nonzero()            # keep only positions where the value changed
    return seq[idx]
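
A quick usage sketch (the array here is just made-up data to show that, like uniq, only consecutive repeats are dropped):

import numpy as np

a = np.array([1, 1, 2, 2, 3, 1, 1])
print(uniq(a))  # [1 2 3 1]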
Albert

Use an OrderedDict (faster than a list comprehension)

from collections import OrderedDict  
a = ['b','a','b','a','a','c','c']
list(OrderedDict.fromkeys(a))
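
For the list above this gives ['b', 'a', 'c']; as noted in an earlier answer, on Python 3.7+ a plain dict.fromkeys(a) behaves the same way.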
DanGoodrick