
I was experiencing a ~5 sec delay when playing an RTSP stream from an IP camera. After a lot of googling (especially this question), I reduced the delay to ~1 sec using the following command:

ffplay -fflags nobuffer -flags low_delay -framedrop -strict experimental \
       -probesize 32 -sync ext rtsp://xxx.xxx.xxx.xxx

But when I tried the `mplayer -benchmark` command from the same question, I found the delay immediately goes away (i.e. almost zero delay).

In the man page of mplayer, it says:

-benchmark

Prints some statistics on CPU usage and dropped frames at the end of playback. Use in combination with -nosound and -vo null for benchmarking only the video codec.

NOTE: With this option MPlayer will also ignore frame duration when playing only video (you can think of that as infinite fps).

I feel this "ignore frame duration" behavior is the key to the question, but after a bunch of googling, I didn't find any flag related to it in ffmpeg. How can I force ffmpeg to ignore the input frame duration?

On the other hand, the reason I'm using ffmpeg is that I need to do image processing with OpenCV, which appears to use parts of ffmpeg under the hood when doing

cv.VideoCapture('rtsp://xxx.xxx.xxx.xxx')

A solution that directly solves the problem in OpenCV would be even more appreciated. I did try reading the VideoCapture repeatedly in a separate thread, but that didn't help.


Some info about the RTSP stream: h264, 1920x1080, 15fps, 1 key frame per 4s

Some other solutions I tried:

ffmpeg -r 99999 -i ...
# didn't work

mplayer ... -dumpstream
# it core dumped

1 Answer


Reading frames using VideoCapture() in a separate thread should increase performance by reducing I/O latency. The read() operation is blocking, so the main program is stalled until a frame is read from the camera stream. By placing the frame reading into a separate thread, we can grab and show frames in parallel instead of relying on a single thread to grab frames in sequential order. Replace src with your RTSP stream link.

Another possible cause of the delay is your 1920x1080 resolution. Resizing the frame before you show it should give better performance.

from threading import Thread
import cv2, time

class VideoStreamWidget(object):
    def __init__(self, src=0):
        self.capture = cv2.VideoCapture(src)
        # Start the thread to read frames from the video stream
        self.thread = Thread(target=self.update, args=())
        self.thread.daemon = True
        self.thread.start()

    def update(self):
        # Read the next frame from the stream in a different thread
        while True:
            if self.capture.isOpened():
                (self.status, self.frame) = self.capture.read()
            time.sleep(.01)

    def show_frame(self):
        # Display frames in main program
        cv2.imshow('frame', self.frame)
        key = cv2.waitKey(1)
        if key == ord('q'):
            self.capture.release()
            cv2.destroyAllWindows()
            exit(1)

if __name__ == '__main__':
    video_stream_widget = VideoStreamWidget()
    while True:
        try:
            video_stream_widget.show_frame()
        except AttributeError:
            pass
– nathancy
  • Thanks for the suggestion! But resizing didn't help, and the 1 second delay did not disappear using threading. As mentioned above, I believe this delay is definitely related to frame duration hidden inside `ffmpeg` used by `opencv`, i.e. the `.read()` call. Since `.read()` themselves cannot be done in parallel, making something else parallel won't help a lot. – ZisIsNotZis Jun 14 '19 at 04:05