I was interested in the performance of the proposed Counter, reduce and sum methods for large lists. Maybe someone else is interested in this as well.
You can have a look here: https://gist.github.com/torstenrudolf/277e98df296f23ff921c
I tested the three methods for this list of dictionaries:
dictList = [{'a': x, 'b': 2*x, 'c': x**2} for x in xrange(10000)]
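The full benchmark code is in the gist above. For reference, here is a minimal sketch of what the three methods and the test function could look like; the exact implementations (including the helper names counter_method, reduce_method and sum_method) are illustrative assumptions, not necessarily the gist's code:

# Sketch of the benchmark; the actual code is in the gist linked above
# and may differ in detail.
import time
from collections import Counter
from functools import reduce  # a builtin in Python 2; imported for clarity


def counter_method(dicts):
    # Sum Counter objects, starting from an empty Counter.
    return dict(sum((Counter(d) for d in dicts), Counter()))


def reduce_method(dicts):
    # Merge dicts pairwise, adding values for shared keys.
    def merge(a, b):
        merged = dict(a)
        for key, value in b.items():
            merged[key] = merged.get(key, 0) + value
        return merged
    return reduce(merge, dicts)


def sum_method(dicts):
    # One full pass over the list per key; assumes every dict has the
    # same keys as the first one.
    return {key: sum(d[key] for d in dicts) for key in dicts[0]}


def test(dicts, num=10):
    # Average wall-clock seconds per call for each method.
    results = {}
    for name, func in [('counter', counter_method),
                       ('reduce', reduce_method),
                       ('sum', sum_method)]:
        start = time.time()
        for _ in range(num):
            func(dicts)
        results[name] = (time.time() - start) / num
    return results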
The sum method showed the best performance, followed by reduce; Counter was the slowest. The times shown below are in seconds.
In [34]: test(dictList)
Out[34]:
{'counter': 0.01955194902420044,
 'reduce': 0.006518083095550537,
 'sum': 0.0018319153785705566}
But this depends on the number of keys in each dictionary: the sum method slows down faster than reduce, presumably because a per-key approach like the sketch above makes one full pass over the list for every key.
l = [{y: x*y for y in xrange(100)} for x in xrange(10000)]
In [37]: test(l, num=100)
Out[37]:
{'counter': 0.2401433277130127,
 'reduce': 0.11110662937164306,
 'sum': 0.2256883692741394}