
I would like to use the redirection operator to pipe the stream from ffmpeg into cv2 so that I can detect and mark the faces in the stream, and then redirect the result so that it is available as a second stream.

One stream without face detection (`withoutfacedetect`) and one with it (`withfacedetect`).

raspivid -w 1920 -h 1080 -fps 30 -o - -t 0 -vf -hf -b 6000000 | ffmpeg -f h264 -i - -vcodec copy -g 50 -strict experimental -f tee -map 0:v "[f=flv]rtmp://xx.xx.xx.xx/live/withoutfacedetect |[f=h264]pipe:1" > test.mp4

I then read up on cv2 and came across this article:

https://www.bogotobogo.com/python/OpenCV_Python/python_opencv3_Image_Object_Detection_Face_Detection_Haar_Cascade_Classifiers.php

I then ran the script on a picture of myself and was amazed to see a square drawn around my face.

But now back to business. What is the best way to do this?

Thanks to @Mark Setchell; I forgot to mention that I'm using a Raspberry Pi 4.

Georg
    Your question raises several issues. 1) You are using a Raspberry Pi aren't you? You don't appear to mention that. 2) Are you wanting to stream this across your network, or is it all local to your Raspi? It makes a difference to the bandwidth/compression. 3) Why do you have `-acodec XX -ab XX`? Surely there is no audio and you want `-an`? 4) Is your question actually *"How do I stream `raspivid` into **OpenCV**"* ? – Mark Setchell Apr 29 '21 at 00:54
  • Thanks, I changed the post. I took out the audio tags. Forgot to take it out of a template there. I want to stream this over the internet. – Georg Apr 29 '21 at 07:33
  • So you want to read the Raspberry Pi camera with `raspivid`, process it with Python? (C++?) on the Raspberry Pi and then stream it across your network, or stream it across the network and process it somewhere else? and at the same time also send the unprocessed stream to a disk file on the Raspberry Pi? – Mark Setchell Apr 29 '21 at 08:26
  • Yes, I would like to read out the camera using raspivid, transfer this stream to ffmpeg, then transfer the stream from ffmpeg to OpenCV and then again as a second stream. – Georg Apr 29 '21 at 08:33

1 Answer


I'm still not 100% certain what you are really trying to do, and have more thoughts than I can express in a comment. I have not tried all of what I think you are trying to do, and I may be over-thinking it, but if I put down my thought-train, maybe others will add in some helpful thoughts/corrections...

Ok, the video stream comes from the camera into the Raspberry Pi initially as RGB or YUV. It seems silly to use ffmpeg to encode that to h264 just to pass it to OpenCV on its stdin, when, AFAIK, OpenCV cannot easily decode it back into BGR or anything it naturally works with for face detection.

So, I think I would switch to raspividyuv (raspivid's raw-output sibling) so that it generates RGB data-frames on stdout, and remove all the h264 bitrate stuff, i.e.

raspividyuv --rgb -w 1920 -h 1080 -fps 30 -t 0 -o - | ffmpeg ...

Now we have RGB coming into ffmpeg, so you need two outputs from the one ffmpeg process: send the raw RGB to OpenCV on its stdin, and h264-encode the second stream to rtmp as you already have. (Note that the tee muxer alone won't do this, because tee duplicates the same encoded packets to every output, and here the two outputs need different codecs.)
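As a sketch of what I mean (untested; the resolution, framerate and pixel format must match what raspividyuv actually emits, the rtmp URL and bitrate are taken from the question, and you may want the Pi's hardware encoder instead of libx264 if it's available in your build):

```shell
# Raw rgb24 frames in on stdin (from raspividyuv), two outputs:
#   1) the same raw frames on stdout, to be piped into the OpenCV script
#   2) an h264/flv stream to the rtmp server
ffmpeg -f rawvideo -pix_fmt rgb24 -video_size 1920x1080 -framerate 30 -i - \
       -f rawvideo -pix_fmt rgb24 pipe:1 \
       -c:v libx264 -b:v 6M -an -f flv rtmp://xx.xx.xx.xx/live/withoutfacedetect
```

The whole pipeline would then look like `raspividyuv ... | ffmpeg ... | python3 yourscript.py`.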

Then in OpenCV, you just need to do a read() from stdin of 1920x1080x3 bytes to get each frame. The frame will be in RGB, but you can use:

cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)

to re-order the channels to BGR as OpenCV requires.

When you read the data from stdin you need to do:

frame = sys.stdin.buffer.read(1920*1080*3)

rather than:

frame = sys.stdin.read(1920*1080*3)

which mangles binary data such as images.

Mark Setchell