
I read this post ("Right way to share opencv video frame as Numpy array between multiprocessing processes") and tried to complete the implementation. The camera is taking images just fine, and the shapes of the images, the buffers, and the image received in the other process all match. But the received image shows black and white noise lines. I tried adding locks around the Array reads/writes, to no avail.

Here is the code. Basically I want to put an image into a numpy array and then into an Array so another process can read the image:

import ctypes
import numpy as np
import cv2
from multiprocessing import Process, Array


class VideoWorker(Process):
    def __init__(self, shared_Array, shape, width, height, fps):
        super(VideoWorker, self).__init__()
        # passing width /height as want to write video file later...
        self.width = width
        self.height = height
        self.fps = fps
        self.shared_Array = shared_Array
        self.shape = shape

    def run(self):
        name = "VideoWorker"
        print ('%s %s' % (name, self.name))

        cv2.namedWindow(name,cv2.WINDOW_NORMAL)
        cv2.resizeWindow(name,640,480)

        while True:
            img = np.frombuffer(self.shared_Array,dtype=ctypes.c_uint8)
            print("%s : got img shape %s " % (name, str(img.shape)))
            cv2.imshow(name, img)

            if cv2.waitKey(20) & 0xFF == ord('q'):
                break

        print("%s: done" %name)

if __name__ == '__main__':
    camera = cv2.VideoCapture(0)
    camera.set(cv2.CAP_PROP_FRAME_WIDTH,1280)
    camera.set(cv2.CAP_PROP_FRAME_HEIGHT,720)
    width = camera.get(cv2.CAP_PROP_FRAME_WIDTH)
    height = camera.get(cv2.CAP_PROP_FRAME_HEIGHT)
    fps = camera.get(cv2.CAP_PROP_FPS)

    cv2.namedWindow("orig",cv2.WINDOW_NORMAL)
    cv2.resizeWindow("orig",640,480)
    cv2.namedWindow("loop",cv2.WINDOW_NORMAL)
    cv2.resizeWindow("loop",640,480)

    grabbed, frame = camera.read()
    shape = frame.shape
    cv2.imshow("orig",frame)
    print("main: shape ",shape, "size ", frame.size, "fps ",fps)

    # size is height x width x channels
    shared_Array = Array(ctypes.c_uint8, shape[0] * shape[1] * shape[2], lock=False)

    worker = VideoWorker(shared_Array, shape, width, height, fps )
    worker.start()

    print("main: reshape size ",shape[0]*shape[1]*shape[2])

    while True:
        buf = np.frombuffer(shared_Array,dtype=np.uint8)
        print("main: frombuffer shape ",buf.shape)

        buf = buf.reshape(shape)
        print("main: loop buf reshape ",buf.shape)

        grabbed, frame = camera.read()
        cv2.imshow("loop",frame)
        print ("main: frame shape ",frame.shape)

        if not grabbed:
            break

        buf[:] = frame

        if worker.is_alive() == False:
            break

        if cv2.waitKey(20) &  0xFF == ord('q'):
            break

    print("Main process done")
    worker.join()
    camera.release()
    cv2.destroyAllWindows()

The output is two good windows and one black-and-white striped window, plus the following (trimmed):

VideoWorker VideoWorker-1
VideoWorker : got img shape (2764800,)
VideoWorker: done
main: shape  (720, 1280, 3) size  2764800 fps  30.0
main: reshape size  2764800
main: frombuffer shape  (2764800,)
main: loop buf reshape  (720, 1280, 3)
main: frame shape  (720, 1280, 3)
main: frombuffer shape  (2764800,)
main: loop buf reshape  (720, 1280, 3)
main: frame shape  (720, 1280, 3)
main: frombuffer shape  (2764800,)
main: loop buf reshape  (720, 1280, 3)
main: frame shape  (720, 1280, 3)
main: frombuffer shape  (2764800,)
main: loop buf reshape  (720, 1280, 3)
main: frame shape  (720, 1280, 3)
Main process done

I'm a bit stuck on sharing frames via Arrays; I have Queues working just fine. First post to Stack Overflow. Suggestions?
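
For reference, the Queue version I have working looks roughly like this (a simplified sketch, not my exact code):

import cv2
from multiprocessing import Process, Queue

def display_worker(q):
    while True:
        frame = q.get()              # blocks until the main process sends a frame
        if frame is None:            # None is used as a stop sentinel
            break
        cv2.imshow("worker", frame)
        cv2.waitKey(1)               # give the window a chance to refresh

if __name__ == '__main__':
    q = Queue()
    worker = Process(target=display_worker, args=(q,))
    worker.start()

    camera = cv2.VideoCapture(0)
    for _ in range(300):             # send a fixed number of frames for the example
        grabbed, frame = camera.read()
        if not grabbed:
            break
        q.put(frame)                 # the whole ndarray is pickled and copied per frame

    q.put(None)                      # tell the worker to exit
    worker.join()
    camera.release()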

  • Not sure why class VideoWorker(Process): and if __name__ == '__main__': didn't get pasted into the code. All indentation is correct in the code as well. – PeterB Oct 22 '18 at 16:16
  • I may have had similar issues. Try saving your numpy arrays as images (using OpenCV's imwrite or skimage) as PNG or TIF and check whether the images are correct "under the hood" and it is only the OpenCV display function (i.e. Qt or whatever it is using) that is messing them up. – anki Oct 22 '18 at 18:58
  • How do you guarantee that the worker is not trying to read the data at the same time as the main process writes it? I don't see any kind of synchronization there. – Dan Mašek Oct 22 '18 at 19:45

4 Answers


I figured it out. Yes, as Dan pointed out, I had to put locking in (I had tried it once before). I also had to get the types and sizes correct: the reshape wants h x w x c, and I am used to w x h x c. Here is a working solution without the capture loop, where both processes display the same OpenCV image via an Array.

import cv2
import multiprocessing as mp
import numpy as np
import ctypes
import time


class Worker(mp.Process):
    def __init__(self, sharedArray, lock, width, height, channels):
        super(Worker, self).__init__()
        self.s = sharedArray
        self.lock = lock
        self.w = width
        self.h = height
        self.c = channels

    def run(self):
        print("worker running")

        # Map the shared buffer into a numpy view while holding the lock
        self.lock.acquire()
        buf = np.frombuffer(self.s.get_obj(), dtype='uint8')
        buf2 = buf.reshape(self.h, self.w, self.c)  # h x w x c, matching OpenCV's layout
        self.lock.release()

        print("worker ",buf2.shape, buf2.size)
        cv2.imshow("worker",buf2)
        cv2.waitKey(-1)


if __name__ == '__main__':

    img = cv2.imread('pic640x480.jpg')
    shape = img.shape
    size = img.size
    width = shape[1]
    height = shape[0]
    channels = shape[2]

    realsize = width * height * channels
    print('main ', shape, size, realsize)

    s = mp.Array(ctypes.c_uint8, realsize)
    lock = mp.Lock()
    # Write the image into the shared buffer under the lock
    lock.acquire()
    buf = np.frombuffer(s.get_obj(), dtype='uint8')
    buf2 = buf.reshape(height, width, channels)
    buf2[:] = img
    lock.release()

    worker = Worker(s,lock,width, height, channels)
    worker.start()

    cv2.imshow("img",img)
    cv2.waitKey(-1)

    worker.join()
    cv2.destroyAllWindows()

Thanks for the comments.

PeterB

Try adding

img = img.reshape(self.shape)

to your run method, just below the np.frombuffer line.

img seems to have the wrong shape and is thus misinterpreted.
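
Applied to the run method from the question, that would look roughly like this (untested sketch):

    def run(self):
        name = "VideoWorker"
        cv2.namedWindow(name, cv2.WINDOW_NORMAL)
        cv2.resizeWindow(name, 640, 480)

        while True:
            img = np.frombuffer(self.shared_Array, dtype=ctypes.c_uint8)
            img = img.reshape(self.shape)   # reinterpret the flat buffer as (height, width, channels)
            cv2.imshow(name, img)
            if cv2.waitKey(20) & 0xFF == ord('q'):
                break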

MSpiller

BTW, I gave up on this approach. I determined that on a Raspberry Pi 3B the overhead of sending 1280x720 images across process address spaces was too much; the CPU was pegged at 98% just moving frames around. I fell back to threads and appear to have gained a few percent in performance over the single-threaded code.
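
Not my original code, but a minimal sketch of the threaded idea, assuming a grabber thread that keeps only the latest frame (threads share memory, so nothing crosses address spaces):

import threading
import cv2

class FrameGrabber(threading.Thread):
    """Continuously reads frames and keeps only the most recent one."""
    def __init__(self, camera):
        super().__init__(daemon=True)
        self.camera = camera
        self.lock = threading.Lock()
        self.frame = None

    def run(self):
        while True:
            grabbed, frame = self.camera.read()
            if not grabbed:
                break
            with self.lock:
                self.frame = frame      # replace the previous frame in place

    def latest(self):
        with self.lock:
            return self.frame

if __name__ == '__main__':
    camera = cv2.VideoCapture(0)
    grabber = FrameGrabber(camera)
    grabber.start()

    while True:
        frame = grabber.latest()
        if frame is not None:
            cv2.imshow("latest", frame)
        if cv2.waitKey(20) & 0xFF == ord('q'):
            break

    camera.release()
    cv2.destroyAllWindows()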

PeterB

I had a similar problem: 4 cameras streaming from the network, with the frames changing and being stacked into a 4-camera tile and processed with multiprocessing. Queues were too slow for me, so I solved the whole thing with pipes (queues are built on top of pipes anyway). It works very well in my opinion, and I no longer have any delay now that I use pipes instead of queues.
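
The core of the pipe approach looks roughly like this (a sketch only; the network cameras, tiling, and processing are omitted and the names are illustrative):

import multiprocessing as mp
import cv2

def consumer(conn):
    while True:
        frame = conn.recv()          # blocks until the producer sends a frame
        if frame is None:            # None acts as a stop sentinel
            break
        cv2.imshow("pipe", frame)
        cv2.waitKey(1)

if __name__ == '__main__':
    parent_conn, child_conn = mp.Pipe()
    proc = mp.Process(target=consumer, args=(child_conn,))
    proc.start()

    camera = cv2.VideoCapture(0)
    for _ in range(200):             # send a fixed number of frames for the example
        grabbed, frame = camera.read()
        if not grabbed:
            break
        parent_conn.send(frame)      # the ndarray is pickled and sent through the pipe

    parent_conn.send(None)
    proc.join()
    camera.release()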

steges