3

I read in Usage of __slots__? that using __slots__ in Python can actually save time. But when I tried to measure the time taken using datetime, the results were the opposite.

import datetime as t

class A():
    def __init__(self,x,y):
        self.x = x
        self.y = y

t1 = t.datetime.now()
a = A(1,2)
t2 = t.datetime.now()
print(t2-t1)

... gave output: 0:00:00.000011

And using __slots__:

import datetime as t

class A():
    __slots__ = 'x','y'
    def __init__(self,x,y):
        self.x = x
        self.y = y

t1 = t.datetime.now()
a = A(1,2)
t2 = t.datetime.now()
print(t2-t1)

... gave output: 0:00:00.000021

Using slots actually took longer. Why do we need to use __slots__ then?

smci
  • 32,567
  • 20
  • 113
  • 146
Ayush Shridhar
  • 115
  • 4
  • 11
  • 6
    I'd note that manually using `datetime` in this way isn't the recommended method for timing code. There's [a module meant for timing code snippets](https://docs.python.org/3/library/timeit.html). And (as mentioned in an answer) you _must_ repeat short operations many times to get anything approaching a meaningful result. – David Z Jul 27 '17 at 06:11

2 Answers

20

__slots__ can save time (depends on Python version), but that's not usually what you use it for. What it really saves is memory. Instead of a __dict__ of fairly large size for every instance, you store attributes directly in the C struct backing the object, and the class itself stores a single copy of the lookup table mapping from names to struct offsets for each attribute. Even on modern Py3 x64 with key-sharing dicts, it's still 96 bytes for a key-sharing __dict__ where the class has a single instance attribute, on top of the 56 bytes for the object structure itself.

By using __slots__, you eliminate the 16 bytes for the pointers to the __dict__ and __weakref__ attributes, and eliminate the __dict__ entirely.

For comparison on Py 3.5:

>>> import sys
>>> class Foo:
...    def __init__(self, x): self.x = x
...
>>> sys.getsizeof(Foo(1)) + sys.getsizeof(Foo(1).__dict__)
152
>>> class Foo:
...    __slots__ = 'x',
...    def __init__(self, x): self.x = x
...
>>> sys.getsizeof(Foo(1))  # With __slots__, doesn't have __dict__ at all
48

That's a savings of over 100 bytes per instance; on Py2 (without key-sharing dictionaries) the savings are even greater.

So it's not that __slots__ is faster in general (it's usually pretty similar), but if you're making millions of instances, saving 100+ B per instance might help you keep your code in cache, in RAM, etc., rather than running out of memory and paging out half your data to swap.
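To see the aggregate effect the paragraph above describes, here's a rough sketch (not from the answer itself) that compares peak memory for many instances using the standard tracemalloc module; the class names and instance count are illustrative:

```python
# Compare peak memory for 100,000 instances with and without __slots__.
import tracemalloc

class Plain:
    def __init__(self, x, y):
        self.x = x
        self.y = y

class Slotted:
    __slots__ = ('x', 'y')
    def __init__(self, x, y):
        self.x = x
        self.y = y

def peak_for(cls, n=100_000):
    """Return peak traced memory (bytes) while holding n instances of cls."""
    tracemalloc.start()
    objs = [cls(i, i) for i in range(n)]
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    del objs
    return peak

plain = peak_for(Plain)
slotted = peak_for(Slotted)
print(plain, slotted)  # the slotted total should be substantially smaller
```

Exact byte counts depend on the interpreter version, but the slotted class should come out well under the plain one.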

As the other answer notes, you never actually accessed your attributes, so you weren't benchmarking slot access at all. Using ipython3 %%timeit magic, I find that loading the x attribute of a given instance repeatedly is about 15% faster when it's slotted (33.5 ns with __slots__ vs. 39.2 ns without), but that's only noticeable in microbenchmarks; it rarely matters in real code, where the actual work involves a lot more than attribute lookup. Reducing memory usage by a factor of 2-3x is a much bigger gain when it matters.
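A minimal sketch of that kind of microbenchmark using the standard timeit module instead of IPython's %%timeit (the class names here are illustrative, and the exact numbers will vary by machine and Python version):

```python
# Time repeated attribute loads on a slotted vs. non-slotted instance.
import timeit

setup = """
class Plain:
    def __init__(self, x): self.x = x

class Slotted:
    __slots__ = ('x',)
    def __init__(self, x): self.x = x

p = Plain(1)
s = Slotted(1)
"""

t_plain = timeit.timeit('p.x', setup=setup, number=1_000_000)
t_slot = timeit.timeit('s.x', setup=setup, number=1_000_000)
print(f'plain:   {t_plain:.4f}s')
print(f'slotted: {t_slot:.4f}s')  # typically slightly faster, by a small margin
```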

ShadowRanger
  • 143,180
  • 12
  • 188
  • 271
13
  1. The article you quoted says using slots provides faster attribute access - you tested the time of object creation, and never accessed any attribute of your object.
  2. Timing a single operation is not statistically meaningful - measure the total time of, say, 100,000 operations.
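The points above can be sketched with the standard timeit module (this reuses the question's class A; the repetition counts are illustrative):

```python
# Time 100,000 instantiations, repeated 5 times, and take the best run
# to reduce noise - rather than timing a single call with datetime.
import timeit

setup = """
class A:
    __slots__ = ('x', 'y')
    def __init__(self, x, y):
        self.x = x
        self.y = y
"""

best = min(timeit.repeat('A(1, 2)', setup=setup, number=100_000, repeat=5))
print(f'best of 5: {best:.4f}s for 100,000 instantiations')
```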
Błotosmętek
  • 12,717
  • 19
  • 29