Since you have not provided any code, my assumption is this: when cv2.VideoCapture is attempting to retrieve a frame but the network cuts off, it freezes for some amount of time and stalls your program until it either times out or a frame is finally retrieved. I'm also assuming that your entire program runs in a single giant loop; in other words, it runs synchronously, with each step dependent on the previous one. Essentially, this question can be reworded to: how do we capture RTSP camera frames asynchronously?
This is a classic scenario where we can use threading to handle heavy I/O operations. Your current situation is that you have multiple camera streams, but if any one camera fails, it stalls the entire application. The reason your application halts when a camera stops working is that reading from a webcam/stream/camera using cv2.VideoCapture().read() is a blocking operation, meaning the main program is stalled until a frame is read from the buffer and returned. The solution is simple: we can use threading to improve performance by offloading the heavy I/O operations to a separate, independent thread. The idea is to spawn another thread that polls for frames in parallel instead of relying on a single thread (our 'main' thread) that grabs frames sequentially. With this approach, once the main thread finishes processing a frame, it simply grabs the current frame from the I/O thread without having to wait on blocking I/O operations.
The benefit of this approach is that if any camera dies, it only stalls that specific I/O thread without having any effect on the main program. With this method, it doesn't matter if a single camera experiences a technical issue, since all the blocking I/O operations happen in individual threads instead of the main application's thread. You also mentioned:
I don't want to use multiprocessing ... I want to find a solution using only OpenCV
You want to be using threading instead of multiprocessing. The difference is that threads share the same memory space, while processes have their own independent memory spaces and do not share them with the main process. This makes it a bit harder to share objects between processes with multiprocessing. Also, I don't think it's possible to have a solution using only OpenCV, due to the fact that cv2.VideoCapture().read() is a blocking operation. With that being said, the idea is to create a new thread for each camera that does nothing but poll for new frames while our main thread handles processing the current frame. You can create a new camera object for each RTSP stream:
from threading import Thread
import cv2, time

class VideoStreamWidget(object):
    def __init__(self, src=0):
        self.capture = cv2.VideoCapture(src)
        # Start the thread to read frames from the video stream
        self.thread = Thread(target=self.update, args=())
        self.thread.daemon = True
        self.thread.start()

    def update(self):
        # Read the next frame from the stream in a different thread
        while True:
            if self.capture.isOpened():
                (self.status, self.frame) = self.capture.read()
            time.sleep(.01)

    def show_frame(self):
        # Display frames in main program
        cv2.imshow('frame', self.frame)
        key = cv2.waitKey(1)
        if key == ord('q'):
            self.capture.release()
            cv2.destroyAllWindows()
            exit(1)

if __name__ == '__main__':
    video_stream_widget = VideoStreamWidget()
    while True:
        try:
            video_stream_widget.show_frame()
        except AttributeError:
            pass
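To see why this pattern works without needing a camera attached, here is a minimal, OpenCV-free sketch of the same idea: a daemon thread polls a simulated slow source while the main thread grabs the latest value instantly. The names `SlowSource` and `StreamReader` are illustrative stand-ins, not part of any library.

```python
from threading import Thread
import time

class SlowSource:
    """Stands in for cv2.VideoCapture: each read() blocks briefly."""
    def __init__(self):
        self.count = 0
    def read(self):
        time.sleep(0.05)          # simulate blocking I/O
        self.count += 1
        return True, self.count   # (status, frame)

class StreamReader:
    def __init__(self, source):
        self.source = source
        self.frame = None
        thread = Thread(target=self._update, daemon=True)
        thread.start()
    def _update(self):
        while True:
            status, frame = self.source.read()
            if status:
                # Threads share memory, so the main thread sees this update
                self.frame = frame

reader = StreamReader(SlowSource())
time.sleep(0.2)                   # let a few frames arrive
start = time.time()
latest = reader.frame             # instant: no blocking read happens here
elapsed = time.time() - start
print(latest is not None, elapsed < 0.05)
```

Grabbing `reader.frame` is just an attribute access, so the main loop's cost no longer depends on how slow (or dead) the underlying source is.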
For an implementation that handles multiple camera streams, take a look at capture multiple camera streams with OpenCV. For other similar examples of threading and streaming from RTSP cameras:
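As a quick, OpenCV-free illustration of the fault-isolation claim above (a dead camera only stalls its own thread), the sketch below runs two reader threads where one source hangs forever, yet the main thread still receives frames from the healthy one. `HealthySource` and `HungSource` are illustrative stand-ins.

```python
from threading import Thread, Event
import time

class HealthySource:
    def __init__(self):
        self.count = 0
    def read(self):
        time.sleep(0.01)
        self.count += 1
        return True, self.count

class HungSource:
    def read(self):
        Event().wait()            # blocks forever, like a dead RTSP camera

def reader(source, box):
    # Poll the source forever, publishing the latest frame into box
    while True:
        status, frame = source.read()
        box['frame'] = frame

good = {'frame': None}
bad = {'frame': None}
Thread(target=reader, args=(HealthySource(), good), daemon=True).start()
Thread(target=reader, args=(HungSource(), bad), daemon=True).start()

time.sleep(0.1)
# The hung reader never publishes a frame, but the healthy one keeps going
print(good['frame'] is not None, bad['frame'] is None)
```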