I would like to remove duplicate dicts from a list.
Specifically, if two dicts have the same value under the key paper_title, keep one and remove the other.
For example, given the list below
test_list = [{"paper_title": 'This is duplicate', 'Paper_year': 2},
             {"paper_title": 'This is duplicate', 'Paper_year': 3},
             {"paper_title": 'Unique One', 'Paper_year': 3},
             {"paper_title": 'Unique two', 'Paper_year': 3}]
It should return
return_value = [{"paper_title": 'This is duplicate', 'Paper_year': 2},
                {"paper_title": 'Unique One', 'Paper_year': 3},
                {"paper_title": 'Unique two', 'Paper_year': 3}]
According to the tutorial, this can be achieved with a list comprehension or frozenset, like so:
test_list = [{"paper_title": 'This is duplicate', 'Paper_year': 2},
             {"paper_title": 'This is duplicate', 'Paper_year': 3},
             {"paper_title": 'Unique One', 'Paper_year': 3},
             {"paper_title": 'Unique two', 'Paper_year': 3}]
return_value = [i for n, i in enumerate(test_list) if i not in test_list[n + 1:]]
However, it removes no duplicates:
return_value = [{"paper_title": 'This is duplicate', 'Paper_year': 2},
                {"paper_title": 'This is duplicate', 'Paper_year': 3},
                {"paper_title": 'Unique One', 'Paper_year': 3},
                {"paper_title": 'Unique two', 'Paper_year': 3}]
May I know which part of the code I should change?
Also, is there a faster way to achieve a similar result?
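For reference, the direction I was considering is to track titles already seen in a set, roughly like the sketch below (dedupe_by_title and seen are just placeholder names of mine, not from the tutorial), but I am not sure whether this is idiomatic or the fastest option:

def dedupe_by_title(dicts):
    # Keep the first dict for each paper_title; skip later ones with the same title.
    seen = set()
    result = []
    for d in dicts:
        title = d["paper_title"]
        if title not in seen:
            seen.add(title)
            result.append(d)
    return result

print(dedupe_by_title(test_list))  # using the test_list defined above
# [{'paper_title': 'This is duplicate', 'Paper_year': 2},
#  {'paper_title': 'Unique One', 'Paper_year': 3},
#  {'paper_title': 'Unique two', 'Paper_year': 3}]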