I could use some help constructing a GStreamer pipeline. The intention is to capture video from a RaspiCam, do image processing (object tracking) with OpenCV on the transmitting RPi, and then stream the video to another RPi. But I'm having trouble with the encoding, and I don't really know which encoder/decoder I should use.
I have looked at Adding opencv processing to gstreamer application and am trying to get gst-rpicamsrc to capture the camera for use in the pipeline. But I can't seem to figure out how to decode the video so that I can use it in OpenCV. It works fine with videotestsrc, but rpicamsrc apparently has different properties.
"rpicamsrc ! "
"h264parse ! "
"tee name=cam ! "
"video/x-h264, width=640, height=480, format=RGB ! "
"omxh264dec ! "
"videoconvert ! "
"appsink name=sink sync=true ! "
".cam rtph264pay ! "
"rtprtxqueue ! "
"udpsink host=127.0.0.1 sync=false port=5000"
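From what I've read, I suspect the pipeline should be shaped more like the string below (an untested sketch — I'm assuming gst-rpicamsrc is installed, `preview=false` and the branch layout are my guesses, and `omxh264dec`/`rtprtxqueue` are just carried over from my attempt above):

```python
# Sketch of the sender pipeline with the tee branches wired explicitly.
# Assumptions: gst-rpicamsrc is installed; omxh264dec is available on this Pi.
capture_pipeline = (
    # rpicamsrc outputs H.264; a caps field like format=RGB only applies
    # to raw video, so it doesn't belong on video/x-h264 caps
    "rpicamsrc preview=false ! "
    "video/x-h264,width=640,height=480 ! "
    "h264parse ! tee name=cam "
    # branch 1: decode to raw BGR frames for OpenCV; appsink is a sink,
    # so nothing may follow it with '!'
    "cam. ! queue ! omxh264dec ! videoconvert ! "
    "video/x-raw,format=BGR ! appsink name=sink sync=true "
    # branch 2: send the still-encoded H.264 over RTP/UDP
    "cam. ! queue ! rtph264pay ! rtprtxqueue ! "
    "udpsink host=127.0.0.1 port=5000 sync=false"
)
```

The differences from my attempt: the tee comes after h264parse, each branch gets its own queue, branches are linked with `cam. !` rather than `.cam`, and only the OpenCV branch is decoded while the RTP branch keeps the encoded H.264.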
I have also experimented with shorter pipelines, but only got it working with videotestsrc.
I also fear that the queue is adding a nasty delay to the image processing, and since I want to track objects that may not be good enough.
I have also tried VideoCapture cap("rpicamsrc ! appsink") and cap("v4l2src ! videodecode ! video/x-raw ! appsink") and so on. And I have done my best to mirror the receiving side with decoders in the opposite direction, but my knowledge is just too limited.
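For completeness, this is roughly what I think the VideoCapture string and the matching receiver would look like (a sketch under my own assumptions: OpenCV built with GStreamer support, `avdec_h264` available via gst-libav, and payload 96 for the RTP caps):

```python
# Capture string for cv2.VideoCapture: decode to BGR before the appsink
capture_str = (
    "rpicamsrc ! video/x-h264,width=640,height=480 ! h264parse ! "
    "omxh264dec ! videoconvert ! video/x-raw,format=BGR ! "
    "appsink drop=true"
)

# Receiver mirrors the sender in reverse: depayload, parse, decode, display
receive_str = (
    'udpsrc port=5000 caps="application/x-rtp,media=video,'
    'encoding-name=H264,payload=96" ! '
    "rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink"
)

def main():
    # Requires OpenCV with the GStreamer backend; run on the Pi itself
    import cv2
    cap = cv2.VideoCapture(capture_str, cv2.CAP_GSTREAMER)
    ok, frame = cap.read()
    if ok:
        print(frame.shape)
```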
Any recommendations would be appreciated!