As Ignacio points out, this is due to Python allocating a bit more memory than required, in order to perform amortized O(1) `.append`s on lists.
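You can watch that over-allocation happen with `sys.getsizeof`: as a list grows element by element, its reported size jumps only occasionally, because most appends land in slots that were reserved ahead of time (a small sketch; the exact growth pattern varies across CPython versions):

```python
from sys import getsizeof

l = []
sizes = []
for i in range(32):
    l.append(i)
    sizes.append(getsizeof(l))

# The size only jumps at certain lengths; in between, appends reuse
# the extra capacity that was allocated ahead of time.
growths = sum(1 for a, b in zip(sizes, sizes[1:]) if b > a)
print(f"{growths} size jumps over {len(sizes)} appends")
```

The number of size jumps is far smaller than the number of appends, which is exactly what makes `.append` O(1) amortized.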
`sorted` creates a new list out of the sequence provided, sorts it in place and returns it. To create the new list, Python extends an empty list with the one passed; that results in the observed over-allocation (which happens during `list_resize`). You can confirm that the sorting itself isn't the culprit by using `list.sort`: the same algorithm is used, but no new list gets created (that is, it's performed in-place). The sizes there, of course, don't differ.
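A quick way to check this yourself: sorting in place with `list.sort` leaves `getsizeof` unchanged, while `sorted` hands back a freshly allocated list. This sketch compares sizes rather than hard-coding byte counts, since those vary across CPython versions:

```python
from sys import getsizeof

l = [i for i in range(10)]
before = getsizeof(l)

l.sort()  # sorts in place: no new list is created
assert getsizeof(l) == before

s = sorted(l)  # builds a brand-new list, allocated independently
assert s == l and s is not l
```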
It is worth noting that this difference mostly shows up when the original list was created in a way that allocates less extra space than `sorted` does. So, with a list comprehension:
```python
from sys import getsizeof

l = [i for i in range(10)]
getsizeof(l)          # 192
getsizeof(sorted(l))  # 200
```
Or a list literal:
```python
l = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
getsizeof(l)          # 144
getsizeof(sorted(l))  # 200
```
the sizes are smaller (more so with literals). Note that the exact byte counts depend on the CPython version and platform.
When creating through `list`, the memory is always over-allocated; Python knows the length of the sequence and preempts future modifications by over-allocating a bit based on that size:
```python
l = list(range(10))
getsizeof(l)          # 200
getsizeof(sorted(l))  # 200
```
So you get no observed difference in the sizes of the list(s).
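If you want to peek at the over-allocation directly, you can estimate a list's allocated capacity from `getsizeof`. This is a rough sketch that assumes one pointer per element slot and 8-byte pointers, i.e. a typical 64-bit CPython build:

```python
from sys import getsizeof

def capacity(lst):
    # Each element slot holds one pointer: 8 bytes on 64-bit builds.
    # Subtracting the empty-list overhead leaves the slot storage.
    return (getsizeof(lst) - getsizeof([])) // 8

l = []
for i in range(10):
    l.append(i)

# The allocated capacity is at least the length, usually a bit more.
print(len(l), capacity(l))
```

The capacity printed is at least the list's length, and typically larger, showing the reserved headroom for future appends.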
As a final note, I must point out that this behavior is specific to the C implementation of Python, i.e. CPython. It's a detail of how the language is implemented and, as such, you shouldn't depend on it in any wacky way. Jython, IronPython, PyPy and any other implementation might or might not exhibit the same behavior.