
I'd like to know the fastest way to pass a big image from one process to another. I have a child process that constantly reads an image (shape: (1080, 1920, 3)) from a camera. From the parent process, I'd like to get the latest image whenever it's needed. The camera's refresh rate is 30 fps.

I tried to use a multiprocessing Queue as follows, but I noticed that putting the image into the queue took a long time and slowed the child process's cycle down to 10 Hz or so.

#Parent process:
    self.queue = mp.Queue(maxsize=1)
    self.stop_process = mp.Event()
    self.process = mp.Process(target=child_process, args=(self.queue, self.stop_process))
    self.process.daemon = True
    self.process.start()

    # Later, get the queued image
    while True:
        image = self.queue.get(block=True)

#Child process:
from queue import Empty  # multiprocessing.Queue raises the stdlib queue.Empty

def child_process(queue, stop):
    while not stop.is_set():
        try:
            image = retrieve_image()  # Using camera's API
            # Drain any stale frame so only the latest one is queued
            while True:
                try:
                    queue.get(block=False)
                except Empty:
                    break
            queue.put(image)
        except Exception:
            log.exception('Exception while reading')
            break
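
For what it's worth, here is a minimal, self-contained sketch of the same pattern that can be used to time the cycle (the camera call is replaced by a random frame, and names like producer are just for this illustration):

import time
import queue as stdlib_queue  # multiprocessing.Queue raises the stdlib queue.Empty
import multiprocessing as mp

import numpy as np

def producer(q, stop):
    while not stop.is_set():
        t0 = time.perf_counter()
        # Stand-in for the camera: a random 1080p BGR frame (~6 MB)
        image = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
        # Drop any stale frame so only the latest one is queued
        try:
            q.get(block=False)
        except stdlib_queue.Empty:
            pass
        q.put(image)  # the whole frame gets pickled and piped to the parent
        print(f'cycle took {time.perf_counter() - t0:.4f} s')

if __name__ == '__main__':
    q = mp.Queue(maxsize=1)
    stop = mp.Event()
    p = mp.Process(target=producer, args=(q, stop), daemon=True)
    p.start()
    for _ in range(100):
        image = q.get(block=True)
    stop.set()
    p.join()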

I also tried a multiprocessing Pipe, but the result was the same. Then I dropped the queue/pipe and used a shared Array instead (mainly based on this: http://thousandfold.net/cz/2014/05/01/sharing-numpy-arrays-between-processes-using-multiprocessing-and-ctypes/). Here is the code. Unfortunately, it didn't improve the delay either. By the way, I couldn't use shape (1080, 1920, 3) for the sharedctypes Array: when I set it, the program crashed with an error like "interrupted by signal 11: SIGSEGV". So I just used a flattened array and reshaped it later, which I guess isn't good for performance.

#Parent process:
self.shared_array = sharedctypes.Array(np.ctypeslib.as_ctypes(np.zeros(1920*1080*3))._type_, np.ctypeslib.as_ctypes(np.zeros(1920*1080*3)), lock=True)
self.stop_process = mp.Event()
self.process = mp.Process(target=child_process, args=(self.shared_array, self.stop_process))
self.process.daemon = True
self.process.start()

# Later, get the image from the shared array
# (np.zeros defaults to float64, so frombuffer reads float64 and astype copies to uint8)
image = np.frombuffer(self.shared_array.get_obj()).reshape((1080, 1920, 3)).astype('uint8')

#Child process:
def child_process(shared_array, stop):
    while not stop.is_set():
        try:
            # Flatten to match the 1-D shared array
            shared_array[:] = retrieve_image().flatten()  # Using camera's API
        except Exception:
            log.exception('Exception while reading')
            break
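
For comparison, here is a minimal sketch of the shared-array variant I had expected to be faster, with the buffer created as uint8 up front (np.zeros above defaults to float64, which makes the buffer 8x larger than the image and forces the astype copy on every read). retrieve_image is again replaced by a random frame for illustration:

import ctypes
import multiprocessing as mp
from multiprocessing import sharedctypes

import numpy as np

H, W, C = 1080, 1920, 3

def child_process(shared_array, stop):
    # Wrap the shared buffer in a (H, W, C) uint8 view once; no copy here
    frame = np.frombuffer(shared_array.get_obj(), dtype=np.uint8).reshape(H, W, C)
    while not stop.is_set():
        image = np.random.randint(0, 256, (H, W, C), dtype=np.uint8)  # camera stand-in
        with shared_array.get_lock():
            frame[:] = image  # single copy into shared memory

if __name__ == '__main__':
    shared_array = sharedctypes.Array(ctypes.c_uint8, H * W * C, lock=True)
    stop = mp.Event()
    p = mp.Process(target=child_process, args=(shared_array, stop), daemon=True)
    p.start()

    # Later, read the latest frame; .copy() detaches it from the shared buffer
    with shared_array.get_lock():
        image = np.frombuffer(shared_array.get_obj(), dtype=np.uint8).reshape(H, W, C).copy()

    stop.set()
    p.join()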

Now I'm not sure what else to try. I looked for similar solutions on SO, but the answers (such as Use numpy array in shared memory for multiprocessing or How do I pass large numpy arrays between python subprocesses without saving to disk?) are old and sometimes recommend an unmaintained package (https://bitbucket.org/cleemesser/numpy-sharedmem/src). Also, those approaches look similar to what I already tried with the shared Array.

Please let me know if you have any suggestions or if you spot any mistakes I made.

