
I'm working with Gazebo Sim, which uses a GStreamer plugin to stream camera video over UDP. The simulation runs on Ubuntu 18.04.

There are some resources for understanding the backend of this setup: the Gazebo Simulation PX4 Guide

It mentions how to create the pipeline:

The video from Gazebo should then display in QGroundControl just as it would from a real camera.

It is also possible to view the video using the Gstreamer Pipeline. Simply enter the following terminal command:

gst-launch-1.0  -v udpsrc port=5600 caps='application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264' \
! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink fps-update-interval=1000 sync=false

It works well in the terminal. I also read these questions:

using gstreamer with python opencv to capture live stream?

Write in Gstreamer pipeline from opencv in python

So I tried to use this pipeline in OpenCV with the following lines:

import cv2
import sys

video = cv2.VideoCapture('udpsrc port=5600 caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264" ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink fps-update-interval=1000 sync=false', cv2.CAP_GSTREAMER)
# video.set(cv2.CAP_PROP_BUFFERSIZE, 3)

# Exit if video not opened.
if not video.isOpened():
    print("Could not open video")
    sys.exit()

# Read first frame.
ok, frame = video.read()
if not ok:
    print('Cannot read video file')
    sys.exit()

But it only gives this error:

Could not open video

I tried different variations of this pipeline in OpenCV, but none of them helped.

Bozkurthan
  • I found useful code that runs directly for my case: https://gist.github.com/patrickelectric/443645bb0fd6e71b34c504d20d475d5a but I just want to use a single line. – Bozkurthan Nov 11 '19 at 05:30

4 Answers


Currently, your pipeline provides no way for OpenCV to extract decoded video frames, because all frames go to the autovideosink element at the end, which takes care of displaying them on-screen. Instead, you should use the appsink element, which is made specifically to let applications receive video frames from the pipeline.

video = cv2.VideoCapture(
    'udpsrc port=5600 caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264"'
    ' ! rtph264depay'
    ' ! avdec_h264'
    ' ! videoconvert'
    ' ! appsink', cv2.CAP_GSTREAMER)
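
If it helps, here is a minimal sketch of a read-and-display loop on top of this capture; the window name and quit key are illustrative additions, not part of the original answer:

import cv2

video = cv2.VideoCapture(
    'udpsrc port=5600 caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264"'
    ' ! rtph264depay'
    ' ! avdec_h264'
    ' ! videoconvert'
    ' ! appsink', cv2.CAP_GSTREAMER)

# Pull decoded frames from appsink and display them until 'q' is pressed.
while video.isOpened():
    ok, frame = video.read()
    if not ok:
        break
    cv2.imshow('udp stream', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video.release()
cv2.destroyAllWindows()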
Velovix
  • That's a good point, but I still get "Could not open video" when I try it. Perhaps there is another issue in the pipeline. The code linked in the comment under the question works well, but I think it's a workaround for the issue. Maybe you can check that code for a useful pipeline. – Bozkurthan Nov 15 '19 at 07:17
  • This pipeline works for me using your code and an RTP server set up with this command: "gst-launch-1.0 videotestsrc ! x264enc ! rtph264pay ! udpsink host=127.0.0.1 port=5600". You could try setting the GST_DEBUG environment variable to "3" and looking at the logs for error messages. – Velovix Nov 15 '19 at 18:16
  • Maybe the pipeline you used runs without error. I'm using the embedded GStreamer plugin, so I can't change the debug mode or other functions. [GStreamer source](https://dev.px4.io/v1.9.0/en/simulation/gazebo.html) – Bozkurthan Nov 19 '19 at 05:54

The following code works without errors:

import cv2

# Read video
video = cv2.VideoCapture("udpsrc port=5600 ! application/x-rtp,payload=96,encoding-name=H264 ! rtpjitterbuffer mode=1 ! rtph264depay ! h264parse ! decodebin ! videoconvert ! appsink", cv2.CAP_GSTREAMER)

I think the decoding part of the original pipeline wasn't right.
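
As a readability aid, the same pipeline can be assembled from annotated parts; the element roles noted in the comments are my reading of standard GStreamer behavior, and the joined string is identical to the one-liner above:

import cv2

# Build the working pipeline from its elements, one role per comment.
pipeline = ' ! '.join([
    'udpsrc port=5600',                                 # receive UDP packets
    'application/x-rtp,payload=96,encoding-name=H264',  # caps filter: RTP-wrapped H.264
    'rtpjitterbuffer mode=1',                           # buffer and reorder RTP packets
    'rtph264depay',                                     # strip the RTP payloading
    'h264parse',                                        # parse the raw H.264 stream
    'decodebin',                                        # pick a suitable decoder automatically
    'videoconvert',                                     # convert to a format OpenCV accepts
    'appsink',                                          # hand decoded frames to the application
])
video = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)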

Bozkurthan

Check whether your OpenCV build has GStreamer support with:

print(cv2.getBuildInformation())

The output should include something like:

Video I/O:
    FFMPEG:                      YES
      avcodec:                   YES (58.54.100)
      avformat:                  YES (58.29.100)
      avutil:                    YES (56.31.100)
      swscale:                   YES (5.5.100)
      avresample:                YES (4.0.0)
    GStreamer:                   YES (1.16.2)
    v4l/v4l2:                    YES (linux/videodev2.h)
    Intel Media SDK:             YES (/mnt/nfs/msdk/lin-18.4.1/lib64/libmfx.so)
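
If you prefer to check this programmatically, a small sketch like the following works (it simply scans the build-information string; the exact layout can vary between OpenCV versions):

import cv2

# Print only the GStreamer line from the Video I/O section of the build info.
for line in cv2.getBuildInformation().splitlines():
    if 'GStreamer' in line:
        print(line.strip())  # e.g. "GStreamer: YES (1.16.2)"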

I tried this code, but it did not work. I also tried a different pipeline. Below are my terminal pipelines:

Sender:

gst-launch-1.0 -v realsensesrc serial=$rs_serial timestamp-mode=clock_all enable-color=true ! rgbddemux name=demux demux.src_depth ! queue ! colorizer near-cut=300 far-cut=3000 ! rtpvrawpay ! udpsink host=192.168.100.80 port=9001

Receiver:

gst-launch-1.0 udpsrc port=9001 caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)RGB, depth=(string)8, width=(string)1280, height=(string)720, payload=(int)96" ! rtpvrawdepay ! videoconvert ! queue ! fpsdisplaysink sync=false

I can see the video with the above terminal pipeline on the receiver. But when I converted it to Python code, the output is:

Could not open Video

gst_receiver.py

import cv2
import sys

video = cv2.VideoCapture(
    'udpsrc port=9001 ! application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW,'
    'sampling=(string)RGB, depth=(string)8, width=(string)1280, height=(string)720, payload=(int)96'
    ' ! rtpvrawdepay ! decodebin ! videoconvert ! queue ! appsink', cv2.CAP_GSTREAMER)

# video.set(cv2.CAP_PROP_BUFFERSIZE,3)
# Exit if video not opened.
if not video.isOpened():
    print("Could not open Video")
    sys.exit()

# Read first frame.
ok, frame = video.read()
if not ok:
    print('Cannot read Video file')
    sys.exit()
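
To follow the GST_DEBUG suggestion from the comments above when running under Python, one option is to set the variable before cv2 is imported, so the GStreamer backend sees it when OpenCV initializes it (this assumes OpenCV initializes GStreamer after import, which is typical but not guaranteed):

import os
import sys

# Set before importing cv2 so the GStreamer backend picks it up.
os.environ['GST_DEBUG'] = '3'

import cv2  # imported after setting the environment on purpose

video = cv2.VideoCapture(
    'udpsrc port=9001 ! application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW,'
    'sampling=(string)RGB, depth=(string)8, width=(string)1280, height=(string)720, payload=(int)96'
    ' ! rtpvrawdepay ! decodebin ! videoconvert ! queue ! appsink', cv2.CAP_GSTREAMER)

if not video.isOpened():
    sys.exit('Could not open Video')  # GStreamer error details should now appear on stderr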

System:

Sender-PC = Ubuntu 18.04
Receiver-PC = Windows 10
Python = 3.7.9
OpenCV = 4.5.5 
codeCat