I have a nested dictionary like this:
from collections import defaultdict, OrderedDict

# use a name that doesn't shadow the built-in dict;
# OrderedDict itself can serve as the default factory
data = defaultdict(OrderedDict)
data[1]['ab'] = 2343
data[1]['ac'] = 6867
data[1]['ad'] = 2345
data[2]['sa'] = 2355
data[2]['sg'] = 4545
data[2]['sf'] = 2445
data[3]['df'] = 9988
I want the count of values for each key. The count of each item will be run through an algorithm to determine where the next value needs to be added or removed. Right now I have this:
count = {}
for k, v in data.items():
    count[k] = len(v)
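The same thing can be written as a dict comprehension over the outer dictionary (same result, same cost of one pass over the outer keys):

# map each outer key to the number of items in its inner dict
count = {k: len(v) for k, v in data.items()}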
Scaling is important here because I'm dealing with very large databases. I need the count every time I do something with the dictionary, and if I'm accessing it a million times, I'd have to rebuild count each time. Is there a more efficient/Pythonic way to do this? Maybe a custom class, similar to this, that keeps a count for each item as and when it is created or removed?
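Something along these lines is what I have in mind (a rough sketch only; the class name and the add/remove methods are placeholders, not part of my real code):

from collections import defaultdict, OrderedDict

class CountingDict(defaultdict):
    """defaultdict of OrderedDicts that also tracks the size of each inner dict."""

    def __init__(self):
        super().__init__(OrderedDict)
        self.count = {}  # outer key -> number of inner items

    def add(self, outer, inner, value):
        # only bump the count when the inner key is actually new
        if inner not in self[outer]:
            self.count[outer] = self.count.get(outer, 0) + 1
        self[outer][inner] = value

    def remove(self, outer, inner):
        del self[outer][inner]
        self.count[outer] -= 1

so that after

data = CountingDict()
data.add(1, 'ab', 2343)
data.add(1, 'ac', 6867)

data.count[1] is already 2 without any extra pass over the dictionary.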