
1. Input: I have a dictionary whose values are lists of lists:

{'1': [['a','b','c'],['f1','f1','f2']]}

2. Expected Result:

{'1': [['a','b','c'],['f1','f2']]}
   I want the repeated values in the nested lists to be removed, so that only unique values are kept in the dictionary.

3. Code I tried:

df_t = pd.DataFrame(df)
df_d= df_t[(x.values())].drop_duplicates()
df_d

Error: TypeError: unhashable type: 'list'

Anubhav
  • If you do not mind the data type change and the possible order loss, convert the inner `list`s to `set`s. The duplicates will be removed automatically. – Ma0 Jun 22 '21 at 06:16
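A minimal sketch of that comment's suggestion (the variable names here are illustrative): converting each inner list to a `set` drops the duplicates automatically, at the cost of the list type and element order.

```python
d = {'1': [['a', 'b', 'c'], ['f1', 'f1', 'f2']]}

# Rebuild the dict, turning every inner list into a set.
# Note: the inner containers become sets (unordered), not lists.
d_sets = {k: [set(inner) for inner in v] for k, v in d.items()}

print(d_sets)  # e.g. {'1': [{'a', 'b', 'c'}, {'f1', 'f2'}]}
```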

1 Answer


You can use `map` and a custom function to solve this.

Define a function that drops the duplicates:

def remove_dupes(l):
    return [list(set(item)) for item in l]

Then apply it with `map` over the dict's items:

d = {'1': [['a','b','c'],['f1','f1','f2']]}
dict(map(lambda x: (x[0], remove_dupes(x[1])), d.items()))

# output (inner order may vary, since sets are unordered):
# {'1': [['a', 'c', 'b'], ['f2', 'f1']]}
Sreeram TP
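If the original element order matters, a variant of the same idea (a sketch, not part of the answer above) can use `dict.fromkeys` instead of `set`; on Python 3.7+ it removes duplicates while keeping the first occurrence of each value in order.

```python
d = {'1': [['a', 'b', 'c'], ['f1', 'f1', 'f2']]}

def remove_dupes_ordered(l):
    # dict.fromkeys keeps the first occurrence of each value, in order.
    return [list(dict.fromkeys(item)) for item in l]

result = {k: remove_dupes_ordered(v) for k, v in d.items()}
print(result)  # {'1': [['a', 'b', 'c'], ['f1', 'f2']]}
```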