
Say my dataframe is:

import pandas

df = pandas.DataFrame([[[1,0]],[[0,0]],[[1,0]]])

which yields:

        0
0  [1, 0]
1  [0, 0]
2  [1, 0]

I want to drop duplicates and keep only the elements [1,0] and [0,0], but if I write:

df.drop_duplicates()

I get the following error: TypeError: unhashable type: 'list'

How can I call drop_duplicates()?

More generally:

df = pandas.DataFrame([[[1,0],"a"],[[0,0],"b"],[[1,0],"c"]], columns=["list", "letter"])

And I want to call df["list"].drop_duplicates(), so that drop_duplicates applies to a Series rather than a DataFrame. Is that possible?

user

4 Answers


You can use the numpy.unique() function:

>>> import pandas
>>> import numpy as np
>>> df = pandas.DataFrame([[[1,0]],[[0,0]],[[1,0]]])
>>> pandas.DataFrame(np.unique(df), columns=df.columns)
        0
0  [0, 0]
1  [1, 0]

If you want to preserve the order, check out: numpy.unique with order preserved
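
For example, a minimal order-preserving sketch (it deduplicates the single list column from the example above and uses np.unique's return_index to recover first-appearance order):

import numpy as np
import pandas

df = pandas.DataFrame([[[1, 0]], [[0, 0]], [[1, 0]]])
vals, first_pos = np.unique(df[0].values, return_index=True)  # sorted unique values + index of first occurrence
ordered = vals[np.argsort(first_pos)]                          # restore order of first appearance
pandas.DataFrame({0: ordered})

        0
0  [1, 0]
1  [0, 0]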

Mazdak
  • @user And [simple is better than complex](https://www.python.org/dev/peps/pep-0020/) ;). – Mazdak May 18 '18 at 20:03
  • @user and if you think one answer is better than the others, it is better to "accept" it so others can know what solution worked best. – Omid May 19 '18 at 12:53
  • @Omid All answers were great and all upvoted but this is the one I used for its simplicity – user May 20 '18 at 20:14
  • Seems like this or the tuple answer should be added to the pandas codebase. – wordsforthewise Jan 18 '20 at 17:00

drop_duplicates

Convert the lists to tuples (which are hashable), call drop_duplicates, then convert back to lists:

df[0].apply(tuple).drop_duplicates().apply(list).to_frame()

        0
0  [1, 0]
1  [0, 0]

collections.OrderedDict

However, I'd much prefer something that doesn't involve apply...

import pandas as pd
from collections import OrderedDict

pd.Series(map(
    list, OrderedDict.fromkeys(map(tuple, df[0].tolist()))
)).to_frame()

Or,

pd.Series(
    list(k) for k in OrderedDict.fromkeys(map(tuple, df[0].tolist()))
).to_frame()

        0
0  [1, 0]
1  [0, 0]
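
On Python 3.7+, plain dicts preserve insertion order, so the same idea works with dict.fromkeys instead of OrderedDict; a minimal sketch:

import pandas as pd

df = pd.DataFrame([[[1, 0]], [[0, 0]], [[1, 0]]])
pd.Series(
    [list(k) for k in dict.fromkeys(map(tuple, df[0]))]
).to_frame()

        0
0  [1, 0]
1  [0, 0]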
cs95
  • Why would you prefer something that doesn't involve apply? The code looks much more readable with apply. – wordsforthewise Jan 18 '20 at 17:08
  • @wordsforthewise the answer to that question is long but it is here: https://stackoverflow.com/questions/54432583/when-should-i-ever-want-to-use-pandas-apply-in-my-code – cs95 Jan 18 '20 at 19:05

I tried the other answers, but they didn't work for my case (a large DataFrame with multiple list columns).

I solved it this way:

df = df[~df.astype(str).duplicated()]
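
The line above compares the string representation of each whole row. If you only want to deduplicate on the list column, a variant (using the column names from the question's second example) would be:

import pandas as pd

df = pd.DataFrame([[[1, 0], "a"], [[0, 0], "b"], [[1, 0], "c"]],
                  columns=["list", "letter"])
# keep the first row for each distinct value of the "list" column
df[~df["list"].astype(str).duplicated()]

     list letter
0  [1, 0]      a
1  [0, 0]      b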
Andreas

Here is one way: turn your Series of lists into separate columns and keep only the rows that are not duplicated:

df[~df[0].apply(pandas.Series).duplicated()]

        0
0  [1, 0]
1  [0, 0]

Explanation:

df[0].apply(pandas.Series) returns:

   0  1
0  1  0
1  0  0
2  1  0

From which you can find duplicates:

>>> df[0].apply(pandas.Series).duplicated()
0    False
1    False
2     True

And finally, index into the original DataFrame with the negation of that mask.
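
Spelled out as a separate step (the mask variable name is just for illustration; this is equivalent to the one-liner above):

mask = ~df[0].apply(pandas.Series).duplicated()  # True for rows to keep (first occurrence of each list)
df[mask]

        0
0  [1, 0]
1  [0, 0]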

sacuL