I am running a robot that uses ffmpeg to send streaming video to letsrobot.tv. You can see my bot on the website; it is called Patton II. I want to overlay a video HUD on the stream.

I have found a link explaining how to do this; however, I do not know how to do it with streaming video as input instead of a single image file.

This is the command that is currently being used to stream the video:

overlayCommand = '-vf dynoverlay=overlayfile=/home/pi/runmyrobot/images/hud.png:check_interval=500'
videoCommandLine = '/usr/local/bin/ffmpeg -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video%s %s -f mpegts -codec:v mpeg1video -s 640x480 -b:v %dk -bf 0 -muxdelay 0.001 %s http://%s:%s/hello/640/480/' % (deviceAnswer, rotationOption, args.kbps, overlayCommand, server, videoPort)
audioCommandLine = '/usr/local/bin/ffmpeg -f alsa -ar 44100 -i hw:1 -ac 2 -f mpegts -codec:a mp2 -b:a 32k -muxdelay 0.001 http://%s:%s/hello/640/480/' % (server, audioPort)
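Since the script only builds the command strings, it can help to print the fully interpolated command before running it. A minimal sketch; the placeholder values (`deviceAnswer`, `server`, and so on) are assumptions, not the robot's real settings:

```python
# Sketch: print the fully interpolated ffmpeg command for debugging.
# All placeholder values below are assumptions; substitute your real settings.
deviceAnswer = "0"
rotationOption = ""            # e.g. '-vf transpose=1' if the camera is rotated
kbps = 350
server = "letsrobot.tv"
videoPort = "12345"

overlayCommand = '-vf dynoverlay=overlayfile=/home/pi/runmyrobot/images/hud.png:check_interval=500'
videoCommandLine = '/usr/local/bin/ffmpeg -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video%s %s -f mpegts -codec:v mpeg1video -s 640x480 -b:v %dk -bf 0 -muxdelay 0.001 %s http://%s:%s/hello/640/480/' % (deviceAnswer, rotationOption, kbps, overlayCommand, server, videoPort)

# Inspect the exact command the robot would execute
print(videoCommandLine)
```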
  • I haven't tried anything; I do not know where to put that code or how to even set it up – user3354787 Aug 09 '17 at 08:08
  • I installed it as an image I wrote to an SD card; it was all pre-installed. Can I upload my current Python script that is controlling the video somewhere on here? – user3354787 Aug 09 '17 at 08:26
  • Sorry, added what I think you need to the OP – user3354787 Aug 09 '17 at 08:33
  • Now, like I said, it would be great if you could not show the Python code that generates the commands, but the *actual* commands (i.e., do a `print(videoCommandLine)` and show that). Have you run them, and did you run into any errors? Or what is the specific problem you're facing? Also, is there any particular reason you're using MPEG-1 video (it's incredibly old and inefficient.)? Finally, I can't find a `dynoverlay` filter in official FFmpeg. Where did you get this from? Are you using the proposed patch from Sept. 2016? (You may delete obsolete comments.) – slhck Aug 09 '17 at 08:46
  • I don't know; like I said, I made the robot and installed the image file provided. That is literally all I did: turn it on and it works. The best I can do is let you remote into my Raspberry Pi, but I doubt you want to do that. Maybe I can figure it out if I play with it – user3354787 Aug 09 '17 at 08:50
  • Well, it's hard to answer a question when the problem isn't even clear. You said you turned it on, installed some (what?) image file, and “it works”? As far as I can see, the command is already overlaying something using a non-official FFmpeg patch. So what is the specific issue you need to solve? – slhck Aug 09 '17 at 08:52
  • If you go to letsrobot.tv and look at the robot called Patton, there is a static PNG overlaying the video. I wanted to replace the static PNG image with a video overlay. I can turn the robot on so you can see – user3354787 Aug 09 '17 at 08:54
  • So you just need to replace the overlay filter you currently have. Use two inputs (the existing one and your new overlay video), and use the `-filter_complex` as shown in the other question. – slhck Aug 09 '17 at 08:58
  • What do you mean by "the existing one"? – user3354787 Aug 09 '17 at 09:00
  • Well, there is already one input video from your webcam (or whatever it is that records the video), which is `-i /dev/video%s`. You need to add another, whatever it is that generates the HUD, then use `-filter_complex` with `overlay` as shown in the other question and overlay that second video on top of the first. – slhck Aug 09 '17 at 09:02
  • Like this? `ffmpeg -i /dev/video%s -i overlay.mov -filter_complex "[1:v]setpts=PTS-10/TB[a];[0:v][a]overlay=enable=gte(t\,5):shortest=1[out]" -map [out] -map 0:a -c:v libx264 -crf 18 -pix_fmt yuv420p -c:a copy output.mov` – user3354787 Aug 09 '17 at 09:07
  • Obviously, you have to adapt it to your use case. You don't need the `setpts` filter if you do not want to shift timestamps or offset the HUD. You also don't need the `enable` filter if you want to show the overlay at all times. You also have no audio stream, so you don't need `-map 0:a`. And `-c:v libx264` uses a different codec entirely. Please try to understand what each of the options means before you copypaste them. – slhck Aug 09 '17 at 09:13

1 Answer


You already have one input, which is the webcam video:

-f v4l2 -framerate 25 -video_size 640x480 -i /dev/video%s

You want to overlay another video, so you have to add a second input, which is your HUD stream. I'm assuming that you already have a stream that's being generated on the fly:

-i /path/to/hud/stream

Then, add a complex filter that overlays one over the other:

-filter_complex "[0:v][1:v]overlay[out]"

After the filter, add a -map "[out]" option to tell ffmpeg to use the generated video as output, and add your remaining options as usual. So, in sum:

/usr/local/bin/ffmpeg -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video%s \
-i /path/to/hud/stream \
-filter_complex "[0:v][1:v]overlay[out]" -map "[out]" \
-f mpegts -codec:v mpeg1video -s 640x480 -b:v %dk -bf 0 \
-muxdelay 0.001 %s http://%s:%s/hello/640/480/
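Applied to the Python script from the question, the change amounts to replacing the `-vf` overlay option with a second input plus `-filter_complex`. A sketch under assumptions: `hud_path` and the placeholder values are hypothetical and must be adapted to your setup.

```python
# Sketch: build the modified command with a second input and -filter_complex.
# hud_path and all placeholder values are assumptions; adapt to your setup.
hud_path = '/home/pi/runmyrobot/images/hud_video.mov'   # hypothetical HUD video
deviceAnswer, rotationOption, kbps = '0', '', 350
server, videoPort = 'letsrobot.tv', '12345'

videoCommandLine = (
    '/usr/local/bin/ffmpeg -f v4l2 -framerate 25 -video_size 640x480 '
    '-i /dev/video%s %s -i %s '
    '-filter_complex "[0:v][1:v]overlay[out]" -map "[out]" '
    '-f mpegts -codec:v mpeg1video -s 640x480 -b:v %dk -bf 0 '
    '-muxdelay 0.001 http://%s:%s/hello/640/480/'
) % (deviceAnswer, rotationOption, hud_path, kbps, server, videoPort)

print(videoCommandLine)
```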

Obviously, without knowing more, this is the most generic advice I can give you.

Some general tips:

  • Make sure that the HUD stream has the same resolution as the webcam video, with the elements placed where you want them. Alternatively, use the overlay filter's x and y options to position the HUD.
  • Your HUD stream should have a transparency (alpha) layer; note that not all codecs and container formats support that.
  • You're using -codec:v mpeg1video, which is MPEG-1 video. It's quite resource-efficient but otherwise low in quality. You may want to choose a better codec, depending on your device's capabilities (e.g., at least MPEG-2 with mpeg2video, MPEG-4 Part 2 with mpeg4, or H.264 with libx264).
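If the HUD is smaller than the webcam frame, the overlay filter's x and y options position it, and a scale filter can resize it first. A sketch of such a filtergraph string; the 160x120 size and the bottom-right placement are example assumptions:

```python
# Sketch: scale the HUD input, then place it bottom-right with overlay's x/y.
# overlay exposes main_w/main_h (base video) and overlay_w/overlay_h (HUD)
# as constants usable in the x/y expressions.
filtergraph = (
    '[1:v]scale=160:120[hud];'                                      # shrink HUD
    '[0:v][hud]overlay=x=main_w-overlay_w-10:y=main_h-overlay_h-10[out]'
)
print('-filter_complex "%s" -map "[out]"' % filtergraph)
```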
  • The video won't be a stream, it will be an actual video file. The stream is from the camera; the overlay is from a video file. Also, does the video overlay have to be MPEG-1, or can it be something else? – user3354787 Aug 09 '17 at 09:14
  • It can be MPEG-1, but it shouldn't be. Like I said, this is old and inefficient. Use a modern codec that supports transparency, like VP9 through `libvpx`, or use a series of PNG images, or Apple ProRes 4444, Apple Animation, … – slhck Aug 09 '17 at 09:22