I am using Python 3.4 on Linux. I want to create an array in RAM that will be shared by processes spawned by the `multiprocessing` module (i.e. shared memory). According to the documentation, this should be possible with `multiprocessing.Array`. When I use

    array = multiprocessing.Array('i', N)

Python creates a file in `/tmp`, zeroes it, and maps it with `mmap.mmap()` as shared memory (I have verified this by looking into `/usr/lib/python3.4/multiprocessing/heap.py`). However, I would like the array to be created truly in memory, not on disk. The reason is that in my use case `N` is very large (more than 100 GB), and I have a lot of RAM but little disk capacity. The call to `multiprocessing.Array()` therefore fails with `IOError: No space left on device`, because the shared memory is backed by a file on disk rather than by RAM.
I was able to get around this by mounting a directory via `tmpfs` and setting `tempfile.tempdir` to point to this directory. However, is there an easier way? Why does Python create shared memory in this way?