I am curious how memory management differs between bytearray and list in Python.
I have found a few questions like Difference between bytearray and list, but none that exactly answer my question.
My question, precisely:
>>> from array import array
>>> x = array("B", (1,2,3,4))
>>> x.__sizeof__()
36
>>> y = bytearray((1,2,3,4))
>>> y.__sizeof__()
32
>>> z = [1,2,3,4]
>>> z.__sizeof__()
36
As we can see, there is a difference in sizes between list/array.array (36 bytes for 4 elements) and bytearray (32 bytes for 4 elements). Can someone explain why? For bytearray it makes sense that it occupies 32 bytes of memory for 4 elements (4 * 8 == 32), but how can this be interpreted for list and array.array?
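To make the per-container overhead more visible, here is a minimal check I would run (exact numbers vary by Python version and build; the variable names are just illustrative):

```python
import sys

# A list stores a pointer per element (4 or 8 bytes each, depending on
# the build), while a bytearray stores one raw byte per element, so the
# size gap widens as the containers grow.
n = 100
as_list = [0] * n
as_bytes = bytearray(n)

print(sys.getsizeof(as_list))   # header + n pointers
print(sys.getsizeof(as_bytes))  # header + n bytes
```

With 100 elements the pointer-per-element cost of the list dominates, which makes the difference easier to see than with only 4 elements.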
# Let's take the case of bytearray (which makes more sense to me, at least :p)
>>> for i in y:
...     print(i, ": ", id(i))
1 : 499962320
2 : 499962336  # diff is 16 units
3 : 499962352  # diff is 16 units
4 : 499962368  # diff is 16 units
Why do the addresses of two contiguous elements differ by 16 units here, when each element occupies only 8 bytes? Does that mean each memory address points to a nibble?
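One thing worth checking while interpreting those ids: iterating a bytearray yields ordinary int objects, so id(i) reports the address of an int object, not of a byte inside the bytearray's buffer. A minimal sketch of this (CPython-specific behaviour):

```python
# Iterating or indexing a bytearray yields int objects, and CPython
# caches small ints (-5 through 256), so id() reports the address of a
# cached int object rather than a position inside the bytearray.
y = bytearray((1, 2, 3, 4))
one = 1
print(y[0] is one)  # True on CPython: both are the cached small int 1
```

So the 16-unit spacing reflects how those int objects were laid out by the allocator, not the layout of the bytearray itself.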
Also, what are the criteria for memory allocation for an integer? I read that Python assigns more memory based on the value of the integer (correct me if I am wrong): the larger the number, the more memory.
E.g.:
>>> y = 10
>>> y.__sizeof__()
14
>>> y = 1000000
>>> y.__sizeof__()
16
>>> y = 10000000000000
>>> y.__sizeof__()
18
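As I understand it, those sizes follow from CPython's variable-length int representation: a fixed header plus as many internal "digits" as the value needs (15-bit digits on 32-bit builds, 30-bit digits on 64-bit builds). A quick sketch (exact sizes vary by build):

```python
import sys

# Larger magnitudes eventually need another internal digit, so
# sys.getsizeof grows with the value.
for n in (10, 10**6, 10**13, 10**30):
    print(n, sys.getsizeof(n))
```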
What are the criteria by which Python allocates memory for an integer?
And why does Python occupy so much more memory than C, which only occupies 8 bytes (mine is a 64-bit machine), when the values are perfectly within the range of a 64-bit integer (2 ** 64)?
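For contrast with C's fixed-width storage, struct.pack can show the raw 8-byte encoding of a 64-bit integer, independent of the value's magnitude (a minimal sketch):

```python
import struct

# A C int64_t always takes exactly 8 bytes, no matter the value;
# "<q" means a little-endian signed 64-bit integer.
small = struct.pack("<q", 10)
large = struct.pack("<q", 10000000000000)
print(len(small), len(large))  # 8 8
```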
Metadata:
Python version : '3.4.3 (v3.4.3:9b73f1c3e601, Feb 24 2015, 22:43:06) [MSC v.1600 32 bit (Intel)]'
Machine arch : 64-bit
P.S.: Kindly point me to a good article where Python memory management is explained in more depth. I spent almost an hour trying to figure these things out, and ended up asking this question on SO. :(