Adapting How do you remove duplicates from a list whilst preserving order? to your list:
seen = set()
block = [row for row in block if row[0] not in seen and not seen.add(row[0])]
This rebuilds block so that only the first row for each distinct first element is kept; any later row whose first value was already seen is dropped.
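The trick works because set.add() always returns None, so not seen.add(row[0]) is always true and only serves to record the value as a side effect. Spelled out as an explicit loop, the same filter looks roughly like this (the unique_first name is just for illustration):

seen = set()
unique_first = []
for row in block:
    if row[0] not in seen:        # first time this first element appears
        seen.add(row[0])
        unique_first.append(row)
block = unique_first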
To instead keep only the rows whose first element occurs exactly once, and remove every row whose first value is duplicated, use a collections.Counter() to count how many times each first element appears, then filter block:
from collections import Counter
counts = Counter(row[0] for row in block)
block = [row for row in block if counts[row[0]] == 1]
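Note that this makes two passes over block (one to build the counts, one to filter), so block has to be an actual list rather than a one-shot iterator. If your rows come from a generator, materialise them into a list first; a minimal sketch, assuming a hypothetical rows_from_csv() generator:

import csv
from collections import Counter

def rows_from_csv(path):
    # Hypothetical generator yielding one list of fields per CSV line.
    with open(path, newline='') as handle:
        yield from csv.reader(handle)

block = list(rows_from_csv('data.csv'))   # materialise so we can iterate twice
counts = Counter(row[0] for row in block)
block = [row for row in block if counts[row[0]] == 1]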
Demo:
>>> from pprint import pprint
>>> from collections import Counter
>>> block = [
... ['alfa', 'T31360N', '2013-12-19 12:07:2'],
... ['beta', 'D41535N', '2013-12-19 12:20:1'],
... ['gamma', 'E61460N', '2013-12-19 13:58:2'],
... ['delta', 'D133PR01', '2013-12-19 14:19:4'],
... ['beta', 'Q3332N', '2013-12-19 14:19:5']
... ]
>>> seen = set()
>>> pprint([row for row in block if row[0] not in seen and not seen.add(row[0])])
[['alfa', 'T31360N', '2013-12-19 12:07:2'],
 ['beta', 'D41535N', '2013-12-19 12:20:1'],
 ['gamma', 'E61460N', '2013-12-19 13:58:2'],
 ['delta', 'D133PR01', '2013-12-19 14:19:4']]
>>> counts = Counter(row[0] for row in block)
>>> pprint([row for row in block if counts[row[0]] == 1])
[['alfa', 'T31360N', '2013-12-19 12:07:2'],
 ['gamma', 'E61460N', '2013-12-19 13:58:2'],
 ['delta', 'D133PR01', '2013-12-19 14:19:4']]
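If you need this in more than one place, the Counter approach is easy to wrap in a small helper that takes the column to test as a parameter; a sketch (the function name and key argument are just illustrative):

from collections import Counter

def drop_duplicated_rows(rows, key=0):
    # Accept any iterable; materialise so we can pass over it twice.
    rows = list(rows)
    # Keep only the rows whose value in column `key` occurs exactly once.
    counts = Counter(row[key] for row in rows)
    return [row for row in rows if counts[row[key]] == 1]

Calling drop_duplicated_rows(block) on the demo data above returns the alfa, gamma and delta rows, and drop_duplicated_rows(block, key=2) would test for duplicates on the timestamp column instead.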