So I have an OpenCV webcam feed that I'd like to read frames from as quickly as possible. Because of the Python GIL, the fastest frame-reading loop I seem to be able to manage in a single script looks like the following:
import cv2

# Parent (or maybe client?) script
# Initialize the video capture object
cam = cv2.VideoCapture(0)

while True:
    ret, frame = cam.read()
    # Some code to pass this numpy frame array to another Python script
    # (which has a queue), which is not under the same time constraint and
    # does some more computationally intensive post-processing...
    if exit_condition:  # placeholder for whatever ends the capture loop
        break

cam.release()
What I'd like is for these frames (NumPy arrays) to be added to some kind of processing queue in a second Python script (or perhaps a multiprocessing process?), which would then do post-processing that isn't under the same time constraints as the cam.read() loop...
So the basic idea would look something like:
Real-time (or as fast as I can get) data collection (camera read) script ----> Analysis script (which does the post-processing, writes results, and produces matplotlib plots, lagging a bit behind the data collection)
I've done some research, and it seems that pipes, sockets, pyzmq, and Python multiprocessing might all be able to achieve what I'm looking for. The problem is that I have no experience with any of them.
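Of those, multiprocessing with a Queue looks the most approachable to me. Just to make the idea concrete, here is roughly the shape I'm picturing (an untested sketch; grab_frames, process_frames, and the fixed frame count are only placeholders for the real logic):

import cv2
import multiprocessing as mp


def grab_frames(q, num_frames):
    # Producer: read frames as fast as possible and hand them off via the queue
    cam = cv2.VideoCapture(0)
    for _ in range(num_frames):  # placeholder exit condition
        ret, frame = cam.read()
        if not ret:
            break
        q.put(frame)
    q.put(None)  # sentinel: tell the consumer we're done
    cam.release()


def process_frames(q):
    # Consumer: not under the same time pressure; the heavy work goes here
    while True:
        frame = q.get()
        if frame is None:
            break
        # ... post-processing / plotting would go here ...


if __name__ == '__main__':
    q = mp.Queue()
    grabber = mp.Process(target=grab_frames, args=(q, 300))
    grabber.start()
    process_frames(q)  # run the consumer in the main process
    grabber.join()

The idea being that grab_frames only ever reads and enqueues, while everything slow happens in process_frames.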
So my question is: which method would best achieve what I'm looking for, and can anyone provide a short example or even some thoughts/ideas to point me in the right direction?
Many thanks!
EDIT: Many thanks to steve for getting me started on the right track. Here's a working gist of what I had in mind. The code works as it is, but if more post-processing steps are added, the queue size will likely grow faster than the main process can work through it. Limiting the frame rate (or bounding the queue, as sketched after the gist) is likely the strategy I'll end up using.
import time
import cv2
import multiprocessing as mp


def control_expt(connection_obj, q_obj, expt_dur):

    def elapsed_time(start_time):
        # time.clock() was removed in Python 3.8; perf_counter() is the replacement
        return time.perf_counter() - start_time

    # Wait for the signal from the parent process to begin grabbing frames
    while True:
        msg = connection_obj.recv()
        if msg == 'Start!':
            break

    # Initialize the video capture object (CAP_DSHOW selects the DirectShow backend on Windows)
    cam = cv2.VideoCapture(cv2.CAP_DSHOW + 0)

    # Start the clock!!
    expt_start_time = time.perf_counter()
    while True:
        ret, frame = cam.read()
        q_obj.put_nowait((elapsed_time(expt_start_time), frame))
        if elapsed_time(expt_start_time) >= expt_dur:
            # Send a sentinel so the consumer knows to stop, then clean up
            q_obj.put_nowait((elapsed_time(expt_start_time), 'stop'))
            connection_obj.close()
            q_obj.close()
            cam.release()
            break
class test_class(object):

    def __init__(self, expt_dur):
        self.parent_conn, self.child_conn = mp.Pipe()
        self.q = mp.Queue()
        self.control_expt_process = mp.Process(target=control_expt,
                                               args=(self.child_conn, self.q, expt_dur))
        self.control_expt_process.start()

    def frame_processor(self):
        self.parent_conn.send('Start!')
        prev_time_stamp = 0
        while True:
            time_stamp, frame = self.q.get()
            fps = 1 / (time_stamp - prev_time_stamp)
            prev_time_stamp = time_stamp
            # Do post-processing of the frame here, but be careful that q.qsize()
            # doesn't end up growing too quickly...
            print(int(self.q.qsize()), fps)
            if isinstance(frame, str) and frame == 'stop':  # sentinel, not a numpy frame
                print('destroy all frames!')
                cv2.destroyAllWindows()
                break
            else:
                cv2.imshow('test', frame)
                cv2.waitKey(30)
        self.control_expt_process.terminate()
if __name__ == '__main__':
    x = test_class(expt_dur=60)
    x.frame_processor()
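For reference, one way to keep that backlog in check would be a bounded queue that simply drops frames once the consumer falls too far behind. A rough, untested sketch (safe_put is a made-up helper and the maxsize of 100 is arbitrary):

import queue
import multiprocessing as mp

q_obj = mp.Queue(maxsize=100)  # arbitrary cap on the backlog


def safe_put(q_obj, item):
    # Drop the item if the queue is full, so the capture loop never blocks
    # and the backlog can't grow without bound.
    try:
        q_obj.put_nowait(item)
        return True
    except queue.Full:
        return False  # frame dropped

In the gist above, the per-frame q_obj.put_nowait(...) call would become safe_put(q_obj, ...), while the 'stop' sentinel should keep using a blocking put() so it is never dropped. Alternatively, a time.sleep() inside the capture loop would limit the frame rate directly.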