I am running into a numpy error, numpy.core._exceptions.MemoryError, in my code. I have plenty of available memory on my machine, so this shouldn't be a problem.
(This is on a Raspberry Pi, armv7l, 4GB)
$ free
              total        used        free      shared  buff/cache   available
Mem:        3748172       87636     3384520        8620      276016     3528836
Swap:       1048572           0     1048572
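For reference, the same "available" figure can be cross-checked from inside Python (this just parses /proc/meminfo, nothing numpy-specific):
with open("/proc/meminfo") as f:
    meminfo = dict(line.split(":", 1) for line in f)
print(meminfo["MemAvailable"].strip())   # '3528836 kB', matching the free output above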
I found this post, which suggested enabling overcommit_memory in the kernel, so I did:
$ cat /proc/sys/vm/overcommit_memory
1
Now when I try to run this example:
import numpy as np
arrays = [np.empty((18, 602, 640), dtype=np.float32) for i in range(200)]
I get the same error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 1, in <listcomp>
numpy.core._exceptions.MemoryError: Unable to allocate 26.5 MiB for an array with shape (18, 602, 640) and data type float32
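For scale, a quick back-of-the-envelope of what the full list comprehension would need (my own arithmetic from the shape in the example, not anything numpy reports):
per_array = 18 * 602 * 640 * 4        # float32 is 4 bytes -> 27,740,160 bytes
print(per_array / 2**20)              # ~26.5 MiB, matching the error message
print(200 * per_array / 2**30)        # ~5.2 GiB for all 200 arrays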
Why is Python (or numpy) behaving this way, and how can I get it to work?
EDIT: Answers to the questions from the comments:
This is a 32-bit system (armv7l):
>>> sys.maxsize
2147483647
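For completeness, the pointer size tells the same story (just the standard struct module, nothing specific to my setup):
import struct, sys
print(sys.maxsize == 2**31 - 1)   # True: 32-bit Python build
print(struct.calcsize("P") * 8)   # pointer width in bits; 32 here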
I printed the approximate cumulative size allocated (according to the error message, each array should be 26.5 MiB) to see where the example fails:
import numpy as np

def allocate_arr(i):
    print(i, i * 26.5)  # i arrays of ~26.5 MiB already allocated
    return np.empty((18, 602, 640), dtype=np.float32)

arrays = [allocate_arr(i) for i in range(0, 200)]
The output shows that the allocations fail at just under 3 GB of RAM allocated:
1 26.5
2 53.0
3 79.5
...
111 2941.5
112 2968.0
113 2994.5
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 1, in <listcomp>
File "<stdin>", line 3, in allocate_arr
numpy.core._exceptions.MemoryError: Unable to allocate 26.5 MiB for an array with shape (18, 602, 640) and data type float32
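Adding up what was actually allocated before the failure (113 arrays had already succeeded when the allocation at i == 113 failed):
per_array = 18 * 602 * 640 * 4        # bytes per array
print(113 * per_array / 2**30)        # ~2.92 GiB allocated before the MemoryError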
Is 3 GB the limit? Is there a way to increase it? And isn't allowing this exactly the point of overcommitting?