
I just thought of doing

mylist = list(set(mylist))

to remove all duplicate entries from mylist. However, chaining built-ins always feels a little hacky. I am wondering, what is the (most) pythonic/zen way of eliminating duplicates from a list?

While searching, I found the above construct given as an answer to the "eliminate duplicates" problem here on Stack Overflow. Nobody called it a bad idea, but that only implies an answer, and explicit is better than implicit.

Is the above construct the way of eliminating duplicates from a list (of hashable elements)?

If not, what is?

Reblochon Masque
user1129682

1 Answer


What makes something Pythonic if not being succinct and explicit? And this is exactly what your code does:

uniq = list(set(dups))

Convert a list to a set, which removes the duplicates since sets only contain unique values, and then turn it back into a list again. Chaining built-ins to accomplish a goal isn't hacky. It is pithy. Succinct. Elegant. It doesn't depend on any modules or libraries. Each operation is clear in what it does. The intent is easily discernible. Truly, this is the right solution.
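A quick sketch of the round-trip described above (the sample values are my own, chosen for illustration):

```python
dups = [3, 1, 2, 1, 3]

# Building a set drops duplicates, since sets hold each value at most once;
# wrapping it in list() turns the result back into a list.
uniq = list(set(dups))

# Caveat: sets are unordered, so the original ordering is not preserved.
# Compare the sorted contents instead of relying on element order.
print(sorted(uniq))
```

Note that this works only for hashable elements, as the question already points out, and that the original order of `dups` is not guaranteed to survive.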

Claudiu
  • That set property could be considered "implicit". That's actually what made me ask this question. But maybe I have been using pylint a little too much lately ... – user1129682 Mar 14 '14 at 17:34
  • @user1129682: Why implicit? You are explicitly creating a set, and a set is well-known to be defined as not containing duplicates (i.e. if you add an element to a set that's already there, the set will be unmodified). – Claudiu Mar 14 '14 at 17:35