`__slots__` can save time (it depends on the Python version), but that's not usually what you use it for. What it really saves is memory. Instead of a fairly large `__dict__` for every instance, you store attributes directly in the C struct backing the object, and the class itself stores a single copy of the lookup table mapping names to struct offsets for each attribute. Even on modern Py3 x64 with key-sharing dicts, a key-sharing `__dict__` still costs 96 bytes when the class has a single instance attribute, on top of the 56 bytes for the object structure itself.
By using `__slots__`, you eliminate the 16 bytes for the pointers to the `__dict__` and `__weakref__` attributes, and eliminate the `__dict__` entirely.
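As a quick illustration (the `Point` class here is just a made-up example), a slotted instance has neither a `__dict__` nor a `__weakref__`, and ad hoc attributes are rejected outright:

```python
class Point:
    __slots__ = ('x', 'y')  # fixes the attribute set at class creation

    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
print(hasattr(p, '__dict__'))     # False: no per-instance dict
print(hasattr(p, '__weakref__'))  # False: no weakref pointer either

try:
    p.z = 3  # nowhere to store it, so this fails
except AttributeError as e:
    print(e)  # 'Point' object has no attribute 'z'
```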
For comparison, on Py 3.5:

```python
>>> import sys
>>> class Foo:
...     def __init__(self, x): self.x = x
...
>>> sys.getsizeof(Foo(1)) + sys.getsizeof(Foo(1).__dict__)
152
>>> class Foo:
...     __slots__ = 'x',
...     def __init__(self, x): self.x = x
...
>>> sys.getsizeof(Foo(1))  # with __slots__, there's no __dict__ at all
48
```
That's a savings of over 100 bytes per instance; on Py2 (without key-sharing dictionaries) the savings are even greater.
So it's not that `__slots__` is faster in general (it's usually pretty similar), but if you're making millions of instances, saving 100+ bytes per instance might help you keep your code in cache, in RAM, etc., rather than running out of memory and paging out half your data to swap.
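If you want to see the aggregate effect yourself, here's a rough sketch using the stdlib `tracemalloc` module (the class names, the helper, and the instance count are arbitrary; exact numbers will vary by Python version and platform):

```python
import tracemalloc

class Plain:
    def __init__(self, x):
        self.x = x

class Slotted:
    __slots__ = ('x',)
    def __init__(self, x):
        self.x = x

def peak_memory(cls, n=1000000):
    """Peak bytes allocated while n instances of cls are alive."""
    tracemalloc.start()
    instances = [cls(i) for i in range(n)]
    peak = tracemalloc.get_traced_memory()[1]
    tracemalloc.stop()
    return peak

print(f"plain:   {peak_memory(Plain) / 1e6:.0f} MB")
print(f"slotted: {peak_memory(Slotted) / 1e6:.0f} MB")
```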
As the other answer notes, you never actually accessed your attributes, so you weren't benchmarking slot access at all, which is why you saw no difference. Using the `ipython3` `%%timeit` magic, I find that loading the `x` attribute of a given instance repeatedly is about 15% faster when it's slotted (33.5 ns with `__slots__` vs. 39.2 ns without), but that's only noticeable in microbenchmarks; it rarely matters in real code (where the actual work is doing a lot more than just attribute lookup). Reducing memory usage by a factor of 2-3x is a much bigger gain when it matters.
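If you want to reproduce that kind of microbenchmark without IPython, here's a minimal sketch using the stdlib `timeit` module (the class names are made up for illustration, and your numbers will differ from mine):

```python
import timeit

setup = """
class Plain:
    def __init__(self, x):
        self.x = x

class Slotted:
    __slots__ = ('x',)
    def __init__(self, x):
        self.x = x

p = Plain(1)
s = Slotted(1)
"""

for label, stmt in (('plain', 'p.x'), ('slotted', 's.x')):
    # Best of 5 runs, 10M lookups each; report per-lookup time in ns.
    best = min(timeit.repeat(stmt, setup=setup, number=10**7))
    print(f"{label}: {best / 10**7 * 1e9:.1f} ns per attribute load")
```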