
I want to receive a byte stream using GStreamer with the Python subprocess module. Currently I can successfully use FFmpeg to pull the byte stream, as shown below.

import cv2
import numpy as np
import subprocess as sp


height = 714
width = 420
rtsp_url = 'rtsp://127.0.0.1:8554/video'

# command
command = ['ffmpeg',
            '-i', rtsp_url,
            '-f', 'rawvideo',
            '-s', '{}x{}'.format(width, height),  # FFmpeg expects WIDTHxHEIGHT
            '-pix_fmt', 'bgr24',
            '-fflags', 'nobuffer',
            '-']

p = sp.Popen(command, stdout=sp.PIPE, bufsize=10**8)

while True:
    raw_image = p.stdout.read(width*height*3)
    image = np.frombuffer(raw_image, dtype='uint8')  # np.fromstring is deprecated
    image = image.reshape((height,width,3)).copy()
    cv2.imshow('image', image)
    key = cv2.waitKey(20)

I want to use a GStreamer command instead of FFmpeg. So far, I have managed to write the byte stream to a file using the GStreamer command line.

gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/video latency=0 drop-on-latency=true ! rtph264depay ! video/x-h264, stream-format='byte-stream' ! filesink location=/home/name/stdout

But it can't output the byte stream to a pipe, so the terminal doesn't display the byte stream, unlike the FFmpeg command. How can I change this command to output the byte stream through a pipe, so that I can read from the pipe? Thank you for taking the time to answer!

This is the RTSP streaming code:

import cv2
import time
import subprocess as sp
import numpy as np


rtsp_url = 'rtsp://127.0.0.1:8554/video'
video_path = r'test.mp4'
cap = cv2.VideoCapture(video_path)

# Get video information
fps = int(cap.get(cv2.CAP_PROP_FPS))
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
print('fps={}'.format(fps))

# command
command = ['ffmpeg',
            '-re',
            '-y',
            '-stream_loop', '-1',
            '-f', 'rawvideo',
            '-vcodec', 'rawvideo',
            '-pix_fmt', 'bgr24',
            '-s', "{}x{}".format(width, height),
            '-r', str(fps),
            '-i', '-',
            '-c:v', 'libx264',
            '-pix_fmt', 'yuv420p',
            '-preset', 'ultrafast',
            # '-flags2', 'local_header',
            '-bsf:v', 'dump_extra=freq=k',  # no nested quotes; subprocess args are not parsed by a shell
            '-keyint_min', '60',
            '-g', '60',
            '-sc_threshold', '0', 
            '-f', 'rtsp',
            '-rtsp_transport', 'tcp',
            '-muxdelay', '0.1', 
            rtsp_url]

p = sp.Popen(command, stdin=sp.PIPE)

cnt = 0
t_start = time.time()
while (cap.isOpened()):
    t_cur = time.time()-t_start

    ret, frame = cap.read()
    if not ret:
        cnt += 1
        print("count: {}".format(cnt))
        cap = cv2.VideoCapture(video_path)
        continue

    p.stdin.write(frame.tobytes())

    cv2.imshow('real_time', frame)

    key = cv2.waitKey(20)
    if key == 27:
        p.terminate()
        break
yuniversi

1 Answer

I have managed to create an example that works on Linux.

I was not able to simulate an RTSP camera, so I used an MP4 file as input.

Creating the MP4 input file using FFmpeg CLI within Python (for testing):

sp.run(shlex.split(f'ffmpeg -y -f lavfi -i testsrc=size={width}x{height}:rate=25:duration=100 -vcodec libx264 -pix_fmt yuv420p {input_file_name}'))

The GStreamer command is:

p = sp.Popen(shlex.split(f'{gstreamer_exe} --quiet filesrc location={input_file_name} ! qtdemux ! video/x-h264 ! avdec_h264 ! videoconvert ! capsfilter caps="video/x-raw, format=BGR" ! filesink location={stdout_file_name}'), stdout=sp.PIPE)

  • --quiet is used because GStreamer prints messages to stdout.
  • filesrc location... is used for reading the MP4 input; replace it with an RTSP source for your stream.
  • videoconvert ! capsfilter caps="video/x-raw, format=BGR" converts the video format to raw BGR.
  • filesink location=/dev/stdout redirects the output to stdout (in Linux).

Code sample:

import cv2
import numpy as np
import subprocess as sp
import shlex
from sys import platform

width = 714
height = 420

input_file_name = 'input.mp4'  # For testing, use MP4 input file instead of RTSP input.

# Build MP4 synthetic input video file for testing:
sp.run(shlex.split(f'ffmpeg -y -f lavfi -i testsrc=size={width}x{height}:rate=25:duration=100 -vcodec libx264 -pix_fmt yuv420p {input_file_name}'))

if platform == "win32":
    # stdout_file_name = "con:"
    # gstreamer_exe = 'c:/gstreamer/1.0/msvc_x86_64/bin/gst-launch-1.0.exe'
    raise Exception('win32 system is not supported')
else:
    stdout_file_name = "/dev/stdout"
    gstreamer_exe = 'gst-launch-1.0'

# https://stackoverflow.com/questions/29794053/streaming-mp4-video-file-on-gstreamer
p = sp.Popen(shlex.split(f'{gstreamer_exe} --quiet filesrc location={input_file_name} ! qtdemux ! video/x-h264 ! avdec_h264 ! videoconvert ! capsfilter caps="video/x-raw, format=BGR" ! filesink location={stdout_file_name}'), stdout=sp.PIPE)

while True:
    raw_image = p.stdout.read(width * height * 3)

    if len(raw_image) < width*height*3:
        break

    image = np.frombuffer(raw_image, dtype='uint8').reshape((height, width, 3))
    cv2.imshow('image', image)
    key = cv2.waitKey(1)

p.stdout.close()
p.wait()
cv2.destroyAllWindows()
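
Note: if /dev/stdout does not exist on your system, fdsink should work as a drop-in replacement for the filesink element; it writes to a file descriptor (fd=1, i.e. stdout, by default). A minimal sketch of the modified Popen line, assuming the same pipeline as above:

p = sp.Popen(shlex.split(f'{gstreamer_exe} --quiet filesrc location={input_file_name} ! qtdemux ! video/x-h264 ! avdec_h264 ! videoconvert ! capsfilter caps="video/x-raw, format=BGR" ! fdsink'), stdout=sp.PIPE)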

Update:

Based on your new question, I managed to create an RTSP capturing example:

import cv2
import numpy as np
import subprocess as sp
import shlex

width = 240
height = 160

rtsp_url = 'rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mp4'  # For testing, use public RTSP input.

gstreamer_exe = 'gst-launch-1.0'  # '/usr/bin/gst-launch-1.0'

# https://stackoverflow.com/questions/29794053/streaming-mp4-video-file-on-gstreamer
p = sp.Popen(shlex.split(f'{gstreamer_exe} --quiet rtspsrc location={rtsp_url} ! queue2 ! rtph264depay ! avdec_h264 ! videoconvert ! capsfilter caps="video/x-raw, format=BGR" ! fdsink'), stdout=sp.PIPE)

while True:
    raw_image = p.stdout.read(width * height * 3)

    if len(raw_image) < width*height*3:
        break

    image = np.frombuffer(raw_image, np.uint8).reshape((height, width, 3))
    cv2.imshow('image', image)
    key = cv2.waitKey(1)

p.stdout.close()
p.wait()
cv2.destroyAllWindows()
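
If rtspsrc fails with "GstUDPSrc ... streaming stopped, reason not-linked" (as in the comments below), forcing TCP as the lower transport may help; rtspsrc has a protocols property, which is the GStreamer counterpart of FFmpeg's -rtsp_transport tcp. A sketch of the modified pipeline, assuming the same elements as above:

p = sp.Popen(shlex.split(f'{gstreamer_exe} --quiet rtspsrc location={rtsp_url} protocols=tcp ! queue2 ! rtph264depay ! avdec_h264 ! videoconvert ! capsfilter caps="video/x-raw, format=BGR" ! fdsink'), stdout=sp.PIPE)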


Rotem
  • Thanks for your answer! I can receive the byte stream by using `fdsink` instead of `filesink` and display the video successfully. But it only succeeds on the test video generated by the FFmpeg command you provided. It fails on my MP4 video with the error: "pipeline doesn't want to preroll". In addition, after changing to the RTSP stream, it reports the error "GstUDPSrc:udpsrc1: Internal data stream error. GstRTSPSrc:rtspsrc0/GstUDPSrc:udpsrc1: streaming stopped, reason not-linked". – yuniversi Apr 19 '22 at 07:12
  • I re-edited the streaming code in the question above. Thanks! – yuniversi Apr 19 '22 at 07:20
  • Based on your new question, I managed to create an RTSP capturing example (I updated my answer). – Rotem Apr 21 '22 at 10:33
  • Thank you very much for helping me solve the problem that has bothered me for a long time!!! – yuniversi Apr 21 '22 at 12:34