Edit: reversed the logic to make the meaning clearer:
Another alternative would be to do something like this:
seen = dict()
seen_setdefault = seen.setdefault
new_row = ["" if cell in seen else seen_setdefault(cell, cell) for cell in row]
To give an example:
>>> row = ["to", "be", "or", "not", "to", "be"]
>>> seen = dict()
>>> seen_setdefault = seen.setdefault
>>> new_row = ["" if cell in seen else seen_setdefault(cell, cell) for cell in row]
>>> new_row
['to', 'be', 'or', 'not', '', '']
Edit 2: Out of curiosity I ran a quick test to see which approach was fastest:
>>> from random import randint
>>> from statistics import mean
>>> from timeit import repeat
>>>
>>> def standard(seq):
... """Trivial modification to standard method for removing duplicates."""
... seen = set()
... seen_add = seen.add
... return ["" if x in seen or seen_add(x) else x for x in seq]
...
>>> def dedup(seq):
... seen = set()
... for v in seq:
... yield '' if v in seen else v
... seen.add(v)
...
>>> def pedro(seq):
... """Pedro's iterator based approach to removing duplicates."""
... my_dedup = dedup
... return [x for x in my_dedup(seq)]
...
>>> def srgerg(seq):
... """Srgerg's dict based approach to removing duplicates."""
... seen = dict()
... seen_setdefault = seen.setdefault
... return ["" if cell in seen else seen_setdefault(cell, cell) for cell in seq]
...
>>> data = [randint(0, 10000) for x in range(100000)]
>>>
>>> mean(repeat("standard(data)", "from __main__ import data, standard", number=100))
1.2130275770426708
>>> mean(repeat("pedro(data)", "from __main__ import data, pedro", number=100))
3.1519048346103555
>>> mean(repeat("srgerg(data)", "from __main__ import data, srgerg", number=100))
1.2611971098676882
As the results show, a relatively simple modification to the standard approach described in this other Stack Overflow question is the fastest.
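For convenience, the winning set-based variant (the `standard` function benchmarked above) can be pulled into a standalone helper:

```python
def blank_duplicates(seq):
    """Replace repeated items with "" while keeping first occurrences."""
    seen = set()
    seen_add = seen.add  # local alias avoids repeated attribute lookup in the loop
    # seen_add(x) returns None (falsy), so it only fires for unseen items
    # and the comprehension still yields x for them.
    return ["" if x in seen or seen_add(x) else x for x in seq]

print(blank_duplicates(["to", "be", "or", "not", "to", "be"]))
# ['to', 'be', 'or', 'not', '', '']
```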