
I have a multiprocessing application in which I want to pass numpy array data back and forth between processes. The idea is to use a SharedMemoryManager.SharedMemory() buffer and have numpy arrays in each process point at this same shared memory. Using locks should avoid race conditions.

Here is the important part of the code:

Creating the process:

buffer = self.data_manager.SharedMemory(size=(w * h * c))
queue = self.data_manager.Queue()
lock = self.data_manager.Lock()
stop_event = self.data_manager.Event()

frame_buffer = np.ndarray(
    (h, w, c),
    dtype=np.uint8,
    buffer=buffer.buf,
)

proc = Process(
    target=run_camera,
    name=camera_request.name,
    kwargs={
        "command_queue": queue,
        "lock": lock,
        "stop_event": stop_event,
        "shared_buffer": buffer,
        # other parameters
    },
)

proc.start()

Target function:

def run_camera(
    command_queue: Queue[CameraCommand],
    lock: Lock,
    stop_event: Event,
    shared_buffer: shared_memory.SharedMemory,
    # other parameters
) -> None:
    frame_buffer = np.ndarray(
        (h, w, c),
        dtype=np.uint8,
        buffer=shared_buffer.buf,
    )
    # do stuff with the array
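For reference, here is the intended pattern in isolation as a minimal, self-contained sketch (the shape values and the single-process setup are illustrative, not the original code): a numpy view onto the shared segment, written in place under a lock.

```python
import numpy as np
from multiprocessing import Lock, shared_memory

# illustrative frame shape, standing in for the real (h, w, c)
FRAME_SHAPE = (4, 4, 3)

shm = shared_memory.SharedMemory(create=True, size=int(np.prod(FRAME_SHAPE)))
lock = Lock()

# numpy view onto the shared buffer; writes through it are visible to
# every process that attaches to the same segment
frame = np.ndarray(FRAME_SHAPE, dtype=np.uint8, buffer=shm.buf)

# copy a new frame in place under the lock so readers never see a torn frame
with lock:
    frame[:] = 255  # in-place write; does not rebind the buffer

value = int(frame[0, 0, 0])

# drop the numpy view before closing, otherwise close() raises BufferError
# because the memoryview still has an exported buffer
del frame
shm.close()
shm.unlink()
print(value)  # 255
```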

The problem is that as soon as the new process is started, I get the following error:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\path\to\Miniconda3\envs\trumpf\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "C:\path\to\Miniconda3\envs\trumpf\lib\multiprocessing\spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
  File "C:\path\to\Miniconda3\envs\trumpf\lib\multiprocessing\shared_memory.py", line 161, in __init__
    h_map = _winapi.OpenFileMapping(
FileNotFoundError: [WinError 2] The system cannot find the file specified: 'wnsm_3ab8096f'

After searching for quite a while, I still can't figure out the reason for this issue. Even weirder: if I place a breakpoint at the line proc.start() and simply let the program continue after hitting it, the error does not appear. This initially had me thinking it was a timing issue, but experimenting with time.sleep() hasn't changed anything so far.

Roland Deschain

1 Answer


Here is how I was able to sort out the problem:

It seems (I'm wondering if anyone can confirm this) like the following was happening; at least this is what I can gather from debugging.

The code creating the process above was executed inside a class method. Since the local variable buffer went out of scope at the end of the method, buffer.close() was called automatically when it was garbage-collected. With proc.start() being the last line of the method, this apparently happened while the new process was still starting up. From my understanding, passing a SharedMemory object to a new process causes a new instance to be created in the child, opened under the same name as the passed instance. Since close() had already been called in the parent by then, that name no longer existed, causing the exception.

The simple solution to the problem was to keep buffer around as a class member. Now the crash no longer happens.
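A minimal sketch of that fix, with hypothetical class and attribute names (the original code uses a SharedMemoryManager; this sketch creates shared_memory.SharedMemory directly for brevity):

```python
import numpy as np
from multiprocessing import Process, shared_memory


def run_camera(shared_buffer: shared_memory.SharedMemory,
               shape: tuple) -> None:
    # the child re-attaches a numpy view onto the same shared segment
    frame_buffer = np.ndarray(shape, dtype=np.uint8, buffer=shared_buffer.buf)
    frame_buffer[:] = 1  # placeholder for the real camera loop


class CameraManager:  # hypothetical name
    def start(self, h: int, w: int, c: int) -> None:
        # keep the SharedMemory object on self so it is not garbage-collected
        # (and thereby implicitly closed) when this method returns
        self.buffer = shared_memory.SharedMemory(create=True, size=h * w * c)
        self.shape = (h, w, c)
        self.proc = Process(
            target=run_camera,
            kwargs={"shared_buffer": self.buffer, "shape": self.shape},
        )
        self.proc.start()

    def stop(self) -> None:
        self.proc.join()
        self.buffer.close()
        self.buffer.unlink()
```

The key point is only the last two lines of start(): because the SharedMemory object is stored on self rather than in a local variable, it stays alive for as long as the owning object does, and the child can open the segment by name.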

As mentioned, I'm curious if anyone can confirm if my understanding of the issue is correct. If so, should this even be possible?

Roland Deschain
    [(old answer of mine)](https://stackoverflow.com/a/63717188/3220135) This is expected behavior on windows that the file (memory mapped file which backs the shared memory) is immediately removed if there are no currently open handles. No explicit call to `unlink` is needed (`shm.unlink` is actually a noop on windows). If you create the `shm` in a function scope and don't keep a reference, it will be garbage collected when the function returns and the variable goes out of scope. – Aaron Oct 25 '22 at 15:28
  • This looks like a bug in the Windows implementation of python - with linux, you do not need to keep the buffer reference around - the behavior does not match the documentation at docs.python.org – ernstkl Jun 15 '23 at 04:39