Removing duplicates the simple way
A classic, efficient way to remove duplicates from a list in Python is to build a set
from the list:
list_with_dups = [1, 1, 2, 3, 2]
list_without_dups = list(set(list_with_dups))
You can apply this method to every sublist of a nested list with a list comprehension:
list1 = [['a', 'b', 'a', 'b'], ['b', 'c', 'd', 'c'], ['a', 'c', 'c']]
without_duplicates = [list(set(sublist)) for sublist in list1]
# e.g. [['b', 'a'], ['d', 'b', 'c'], ['c', 'a']] -- note that a set does not
# preserve order, so the element order within each sublist is arbitrary
Removing duplicates whilst conserving order
If the original order matters, apply the classic order-preserving recipe (the
well-known f7 from "How do you remove duplicates from a list whilst preserving
order?") to each sublist:
def f7(seq):
    seen = set()
    seen_add = seen.add  # bind the method once to avoid attribute lookups in the loop
    # keep x only on its first occurrence; seen_add(x) returns None, so the
    # `or` clause records x as seen without affecting the filter condition
    return [x for x in seq if not (x in seen or seen_add(x))]
list1 = [['a', 'b', 'a', 'b'], ['b', 'c', 'd', 'c'], ['a', 'c', 'c']]
without_duplicates = [f7(sublist) for sublist in list1]
# = [['a', 'b'], ['b', 'c', 'd'], ['a', 'c']]
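As an aside, on Python 3.7+ a shorter order-preserving alternative is dict.fromkeys, since dicts preserve insertion order; a minimal sketch (hashable elements only):

```python
# dict.fromkeys keeps the first occurrence of each key, in insertion order,
# so converting the keys back to a list deduplicates while conserving order.
list1 = [['a', 'b', 'a', 'b'], ['b', 'c', 'd', 'c'], ['a', 'c', 'c']]
without_duplicates = [list(dict.fromkeys(sublist)) for sublist in list1]
# = [['a', 'b'], ['b', 'c', 'd'], ['a', 'c']]
```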