
I am writing code for an application where performance is important, and I am wondering why `defaultdict` seems to be faster than `setdefault`.

I would like to be able to use `setdefault`, mostly because I do not like the printed output of the nested `defaultdict` (see the implementation below).

In my code, I need to test whether `element_id` is already a key of the dict.

Here are the two functions that I am testing:

def defaultdictfunc(subcases, other_ids, element_ids):
    dict_name = defaultdict(lambda: defaultdict(lambda: defaultdict(dict)))
    for subcase in subcases:
        for other_id in other_ids:
            for element_id in element_ids:
                if element_id in dict_name[subcase][other_id]:
                    # error duplicate element_id
                    pass
                else:
                    dict_name[subcase][other_id][element_id] = 0
    return dict_name

def setdefaultfunc(subcases, other_ids, element_ids):
    dict_name = {}
    for subcase in subcases:
        for other_id in other_ids:
            for element_id in element_ids:
                if element_id in dict_name.setdefault(subcase, {}).setdefault(other_id, {}):
                    # error duplicate element_id
                    pass
                else:
                    dict_name[subcase][other_id][element_id] = 0

    return dict_name

IPython input and output:

In [1]: from numpy.random import randint

In [2]: subcases,other_ids,element_ids=(randint(0,100,100),randint(0,100,100),randint(0,100,100))

In [5]: from collections import defaultdict

In [6]: defaultdictfunc(subcases,other_ids,element_ids)==setdefaultfunc(subcases,other_ids,element_ids)
Out[6]: True

In [7]: %timeit defaultdictfunc(subcases,other_ids,element_ids)
10 loops, best of 3: 177 ms per loop

In [8]: %timeit setdefaultfunc(subcases,other_ids,element_ids)
1 loops, best of 3: 351 ms per loop

Why is `setdefaultfunc` slower? I thought the underlying code would be the same. Is there a way to improve its speed?

Thanks

snowleopard
  • [The docs](https://docs.python.org/3/library/collections.html#defaultdict-examples) do say, "[using `defaultdict`] is simpler and faster than an equivalent technique using `dict.setdefault()`." To work around the printing issue, it is easy (and quick) to convert a `defaultdict` back to a regular `dict` by using `dict(dict_name)` (see the conversion sketch after these comments). – unutbu Jul 28 '16 at 01:55
  • It would make sense that `defaultdict` is faster than `dict.setdefault()` since the former sets its default for the entire dict at creation time, whereas `setdefault()` does it per element when it is read. One reason to use `setdefault` is when the default you assign is based on the key (or something) rather than a generic default for the entire dict. – aneroid Jul 28 '16 at 08:04
  • Thanks guys, makes perfect sense now. – snowleopard Jul 29 '16 at 21:51
  • https://stackoverflow.com/questions/12925052/python-and-default-dict-how-to-pprint – Peter Wood Sep 23 '19 at 17:00
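
Following up on those two comments, here is a minimal sketch of the conversion they point to (the helper name `to_plain_dict` is purely illustrative). Note that `dict(dict_name)` alone only converts the outermost level, so the nested levels still print as `defaultdict(...)`; the helper below recurses so that `print`/`pprint` shows plain-dict output:

from collections import defaultdict
from pprint import pprint

def to_plain_dict(d):
    """Recursively convert a (possibly nested) defaultdict into a plain dict."""
    if isinstance(d, defaultdict):
        return {key: to_plain_dict(value) for key, value in d.items()}
    return d

nested = defaultdict(lambda: defaultdict(lambda: defaultdict(dict)))
nested[1][2][3] = 0
pprint(to_plain_dict(nested))  # {1: {2: {3: 0}}}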

3 Answers


According to user aneroid in this comment:

> It would make sense that `defaultdict` is faster than `dict.setdefault()` since the former sets its default for the entire dict at creation time, whereas `setdefault()` does it per element when it is read. One reason to use `setdefault` is when the default you assign is based on the key (or something) rather than a generic default for the entire dict.
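
To make that point concrete, here is a small sketch (not part of the original answer; the counting helper `make_default` is purely illustrative). With `setdefault` the default value is an ordinary argument, so it is built on every call even when the key already exists, while a `defaultdict` only invokes its factory on a missing key:

from collections import defaultdict

calls = 0

def make_default():
    """Count how often a default value is actually constructed."""
    global calls
    calls += 1
    return []

# setdefault: the default is built on *every* call, hit or miss.
plain = {}
for i in range(5):
    plain.setdefault('k', make_default()).append(i)
print(calls)  # 5 -- one new list per iteration, four of them thrown away

# defaultdict: the factory runs only when the key is missing.
calls = 0
auto = defaultdict(make_default)
for i in range(5):
    auto['k'].append(i)
print(calls)  # 1 -- the factory ran once, on the first miss

The `setdefaultfunc` in the question pays this cost on every inner iteration, because both `{}` defaults are evaluated whether or not the keys are already present.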

snakecharmerb
val = 20_000_000


def defaultdict():
    """
    defaultdict 1000000: 0.4460279941558838
    defaultdict 10000000: 4.371468782424927
    defaultdict 20000000: 8.807381391525269
    """
    # Importing here (inside the function) makes `defaultdict` refer to the
    # collections class rather than this function, which shares its name.
    from collections import defaultdict
    import time
    a = defaultdict(list)
    t = time.time()
    for i in range(val):
        key = i % (val / 2)  # note: val / 2 is a float, so the keys are floats
        a[key].append(i)
    print(f'defaultdict {val}:', time.time() - t)


def setdefault():
    """
    setdefault 1000000: 0.3767530918121338
    setdefault 10000000: 4.230009078979492
    setdefault 20000000: 8.19938588142395
    """
    import time
    a = {}
    t = time.time()
    for i in range(val):
        key = i % (val / 2)
        a.setdefault(key, []).append(i)
    print(f'setdefault {val}:', time.time() - t)

Python 3.10
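
If you want a more controlled measurement than hand-timed `time.time()` deltas, the same comparison can be expressed with `timeit` (a rough sketch, not part of the answer above; it uses integer keys via `n // 2` rather than the float keys that `val / 2` produces):

import timeit

setup = "from collections import defaultdict; n = 1_000_000"

dd_stmt = (
    "a = defaultdict(list)\n"
    "for i in range(n):\n"
    "    a[i % (n // 2)].append(i)"
)
sd_stmt = (
    "a = {}\n"
    "for i in range(n):\n"
    "    a.setdefault(i % (n // 2), []).append(i)"
)

# Run each statement several times and report the minimum, as is usual with timeit.
print("defaultdict:", min(timeit.repeat(dd_stmt, setup, number=1, repeat=5)))
print("setdefault: ", min(timeit.repeat(sd_stmt, setup, number=1, repeat=5)))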

Vladimir

`setdefaultfunc` is slower because it builds a new dict on every iteration of the loop (each `{}` default is created whether or not it is needed, just as `dict()` would be), while `defaultdict` avoids this by design.

With a small change, you can easily improve `setdefaultfunc`:

def setdefaultfunc2(subcases, other_ids, element_ids):
    dict_name = {}
    for subcase in subcases:
        # Create or fetch each intermediate dict once per outer loop,
        # instead of once per element_id.
        subcase_dict = dict_name.setdefault(subcase, {})
        for other_id in other_ids:
            other_id_dict = subcase_dict.setdefault(other_id, {})
            for element_id in element_ids:
                if element_id in other_id_dict:
                    # error duplicate element_id
                    pass
                else:
                    other_id_dict[element_id] = 0
    return dict_name

With this change, the results on my machine were:

In [37]: defaultdictfunc(subcases,other_ids,element_ids)==setdefaultfunc2(subcases,other_ids,element_ids)
Out[37]: True

In [38]: %timeit defaultdictfunc(subcases,other_ids,element_ids)
286 ms ± 8.55 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [39]: %timeit setdefaultfunc(subcases,other_ids,element_ids)
434 ms ± 1.78 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [40]: %timeit setdefaultfunc2(subcases,other_ids,element_ids)
174 ms ± 348 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

IMO, `defaultdict` does not provide enough of a performance gain to make it worth using.

Diego Queiroz
  • Doesn't look like a proper comparison. What are the times when you optimize the defaultdict solution the same way? – superb rain Nov 15 '20 at 13:31
  • @superbrain Your request does not make sense to me. When you use `defaultdict` (instead of `setdefault`), you don't control when the dict is instantiated (you pass a reference to the class in the constructor, and it handles the rest when needed). So, whether you use a single complex constructor (as the `defaultdictfunc` function does) or several simple constructors in each iteration (as a `defaultdictfunc2` would), the results will be the same, or even worse. I didn't test it, indeed, but I feel no need to try due to the lack of evidence this would improve anything. – Diego Queiroz Nov 16 '20 at 14:25
  • In `defaultdictfunc`, you reaccess `dict_name[subcase][other_id]` over and over again in the innermost loop. For a proper comparison, you should store that in a variable `other_id_dict` like you do in `setdefaultfunc2`, and `subcase_dict` as well, though that has a much smaller impact (a sketch of this change follows after these comments). – superb rain Nov 16 '20 at 14:45
  • @superbrain The access operation of a Python dict is constant time, that is, O(1). There is no performance impact in accessing it directly or through an auxiliary variable. Check this: https://stackoverflow.com/questions/1963507/time-complexity-of-accessing-a-python-dict – Diego Queiroz Nov 23 '20 at 22:00
  • O(1) is still slower than O(0). Doing *something*, no matter how fast, still takes more time than doing *nothing*. It *does* have a performance impact, and you'd see it if you tried it. – superb rain Nov 23 '20 at 22:31
  • Err, actually, O(1) and O(0) are both constants, so they are the same complexity in Big-O notation. O(1) is just a common reference for a constant-time algorithm (see https://en.m.wikipedia.org/wiki/Time_complexity#Table_of_common_time_complexities ). Anyway, I'll be happy to upvote your answer if you extend mine with a new untested situation that proves something. Be bold! – Diego Queiroz Nov 25 '20 at 00:40
  • O(1) and O(0) are *not* the same. For example the constant 1-function is in O(1) but *not* in O(0). But you don't even need to understand that. Again: Doing something takes more time than doing nothing. It's as simple as that. – superb rain Nov 25 '20 at 01:22
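
For completeness, here is a rough sketch of the change superb rain describes in the comments above (the names `defaultdictfunc_cached`, `subcase_dict`, and `other_id_dict` are purely illustrative): hoist the repeated `dict_name[subcase][other_id]` lookup out of the innermost loop, mirroring what `setdefaultfunc2` already does.

from collections import defaultdict

def defaultdictfunc_cached(subcases, other_ids, element_ids):
    dict_name = defaultdict(lambda: defaultdict(lambda: defaultdict(dict)))
    for subcase in subcases:
        subcase_dict = dict_name[subcase]
        for other_id in other_ids:
            # Look the inner dict up once per other_id instead of once per element_id.
            other_id_dict = subcase_dict[other_id]
            for element_id in element_ids:
                if element_id in other_id_dict:
                    # error duplicate element_id
                    pass
                else:
                    other_id_dict[element_id] = 0
    return dict_name

Whether this closes the gap with `setdefaultfunc2` would need to be measured; the point is only that both variants should hoist the lookups before their timings are compared.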