I have a Python application for real-time video processing with OpenCV and a USB camera. The camera acquires 30 frames per second. The image processing runs in parallel on two separate cores at about 8 FPS each, for a total throughput of ~16 FPS.
I implemented this with a size-1 queue that the main process writes frames into. The problem is that, as mentioned in other questions (like here), the camera images are stored in a fixed-size FIFO buffer before being read with VideoCapture.read(), which causes lag in some applications. With the camera I am using I have no control over this buffer. Since the buffer is filled at a higher rate than my code drains it, the images sent to the child processes are not the latest ones.
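For reference, the usual way to shrink the driver-side buffer would be something like the snippet below (CAP_PROP_BUFFERSIZE is only honored by some backends), but it has no effect with my camera:

cap = cv2.VideoCapture(VIDEO_DEVICE)
# Request a 1-frame buffer; whether this is honored depends on the backend/driver.
cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)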
I worked around this by skipping one frame on each iteration (the grab() call in get_frame below). However, the resulting acquisition rate does not exactly match the algorithm's throughput (see the rough numbers after the code), and the algorithm's speed may change in the future. How can I make sure that the camera buffer is always empty, so that no undesired lag builds up? Here is a simplified version of my code:
from multiprocessing import Process, Queue
import cv2
import os

VIDEO_DEVICE = 0
SKIP_N_FRAMES = 1

def img_process(img_queue):
    # Worker: block until the main process puts a frame, then process it.
    while True:
        img = img_queue.get()
        do_some_processing(img)

def get_frame(cap):
    # Discard SKIP_N_FRAMES buffered frames, then decode and return the next one.
    for i in range(SKIP_N_FRAMES):
        cap.grab()
    retval, frame = cap.read()
    return frame

if __name__ == '__main__':
    img_queue = Queue(1)
    cap = cv2.VideoCapture(VIDEO_DEVICE)
    p1 = Process(target=img_process, args=(img_queue,))
    p2 = Process(target=img_process, args=(img_queue,))
    p1.start()
    p2.start()
    # assign the processes to separate cores
    os.system("taskset -p -c %d %d" % (4, p1.pid))
    os.system("taskset -p -c %d %d" % (5, p2.pid))
    while cap.isOpened():
        frame = get_frame(cap)
        img_queue.put(frame)  # blocks until one of the workers is free
    p1.terminate()
    p2.terminate()
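For concreteness, these are the rough numbers behind the mismatch mentioned above (assuming the rates stay constant, which they may not):

CAMERA_FPS = 30.0      # frames produced by the camera per second
PROCESSING_FPS = 16.0  # combined throughput of the two workers
frames_per_iteration = SKIP_N_FRAMES + 1  # = 2: one grab() plus one read()
effective_rate = CAMERA_FPS / frames_per_iteration  # = 15 FPS actually handed to the workers

So the workers are slightly underfed at 15 FPS, and if their combined speed ever drops below that, the consumption rate (twice the processing rate) falls under 30 FPS and the buffer starts filling up again.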