
I have two .OGG files of similar size, FPS, and duration. My goal is to combine them into a side-by-side presentation using FFMPEG. To this end I've tried the following command:

ffmpeg -i subject.ogg -vf "[in]pad=3*iw:3*ih[left];movie=clinician.ogg[right];[left] [right]overlay=100:0[out]" combined.ogg

Suffice it to say that the resulting video is non-playable. During the combination process FFMPEG prints lots of errors that read like:

[Parsed_overlay_2 @ 0x1eb7d3e0] Buffer queue overflow, dropping

What is this telling me?

Note:

  • both source files are playable
  • I padded the 'output' to be rather large in an attempt to understand the parameters
  • the placement of the second video at 100:0 is arbitrary. Once I get the command working I'll move it to a better location in the output.
  • both videos began life as .FLV recorded from web cameras. I converted them to .ogg as FFMPEG didn't want to combine two .FLV files. If there is a better route to this, please let me know.

So - what's wrong with my parameters and what am I doing to cause these FFMPEG errors?

EDIT:
ffmpeg -i clinician.ogg

Input #0, ogg, from 'clinician.ogg':
  Duration: 00:05:20.98, start: 0.001000, bitrate: 2273 kb/s
    Stream #0:0: Video: theora, yuv420p, 500x500 [SAR 1:1 DAR 1:1], 1k tbr, 1k tbn, 1k tbc
    Metadata:
      SERVER       : Red5 Server 1.0.0 RC1 $Rev: 4193 $
      CANSEEKTOEND : true
      ENCODER      : Lavf54.31.100
    Stream #0:1: Audio: vorbis, 8000 Hz, stereo, s16
    Metadata:
      SERVER       : Red5 Server 1.0.0 RC1 $Rev: 4193 $
      CANSEEKTOEND : true
      ENCODER      : Lavf54.31.100

ffmpeg -i subject.ogg

Input #0, ogg, from 'subject.ogg':
  Duration: 00:05:17.60, start: 0.001000, bitrate: 1341 kb/s
    Stream #0:0: Video: theora, yuv420p, 300x300 [SAR 1:1 DAR 1:1], 83.33 tbr, 1k tbn, 1k tbc
    Metadata:
      SERVER       : Red5 Server 1.0.0 RC1 $Rev: 4193 $
      CANSEEKTOEND : true
      ENCODER      : Lavf54.31.100
    Stream #0:1: Audio: vorbis, 8000 Hz, stereo, s16
    Metadata:
      SERVER       : Red5 Server 1.0.0 RC1 $Rev: 4193 $
      CANSEEKTOEND : true
      ENCODER      : Lavf54.31.100

ethrbunny
    Well don't expect to get an answer here, but it should be helpful if you add the output of `ffmpeg -i subject.ogg` and `ffmpeg -i clinician.ogg`. – rekire Nov 27 '12 at 18:01
  • The complete console output of your first command will be useful (you can exclude the repeat errors). Also we could then see if you're using an ancient build of ffmpeg or something more recent. – llogan Nov 27 '12 at 21:42

2 Answers


Converting to x264 was a great suggestion. That seemed to turn the tide.

Here are some notes for posterity:

  • to convert flv to x264 and correct audio sync issues:

ffmpeg -y -i subject_s_2242_r_1658.flv -async 1 -ac 2 -strict -2 -acodec vorbis \
-c:v libx264 -preset slow -crf 22 subject.mkv

  • to merge two x264 files into a single side-by-side file and put the two mono audio tracks into stereo in the resulting file (an equivalent single-filtergraph form is sketched below):

ffmpeg -y -i clinician.mkv -vf "movie=subject.mkv[right];pad=iw*2:ih:0:0[left];[left][right]overlay=500:0" \
-filter_complex "amovie=clinician.mkv[l];amovie=subject.mkv[r];[l][r] amerge" final.mkv
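For what it's worth, the same merge might also be written as one -filter_complex graph that handles video and audio together, so each input is only opened once. This is an untested sketch, not the command I actually ran; the overlay position and the x264/vorbis settings are simply carried over from above:

ffmpeg -y -i clinician.mkv -i subject.mkv \
-filter_complex "[0:v]pad=iw*2:ih[left];[left][1:v]overlay=500:0[v];[0:a][1:a]amerge[a]" \
-map "[v]" -map "[a]" -c:v libx264 -preset slow -crf 22 -strict -2 -c:a vorbis final.mkv

Keeping everything in a single graph also avoids mixing -vf and -filter_complex in the same command.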

I was unable to install AVISYNTH (running on CentOS 6.2) but it does look like a great solution.

ethrbunny
  • Yay, this filter worked when a fancier one gave many "Buffer queue overflow, dropping" errors: `ffmpeg -i in1.mov -i in1.mov -filter_complex "[0:v]setpts=PTS-STARTPTS, pad=iw*2:ih[bg]; [1:v]setpts=PTS-STARTPTS[fg]; [bg][fg]overlay=w; amerge,pan=stereo:c0 – Camille Goudeseune Feb 07 '14 at 21:55

It is probably easiest to do this using Avisynth.

Make the following input.avs file:

a = AviSource("first.avi")
b = AviSource("second.avi")
StackHorizontal(a,b)

Then run ffmpeg -i input.avs output.avi ... plus any other options you want.
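For example, something along these lines (the codec settings here are just a suggestion, and reading .avs scripts requires an ffmpeg build with AviSynth support):

ffmpeg -i input.avs -c:v libx264 -crf 22 -c:a libvorbis output.mkv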

EDIT: Another way to do it (not fast) is to dump the frames from both files to PNG and combine them with ImageMagick (for example, montage) or similar image processing tools.

#!/bin/bash
# Dump every frame of each clip to a numbered PNG sequence
ffmpeg -i first.avi first_%05d.png
ffmpeg -i second.avi second_%05d.png
# Tile each pair of matching frames side by side, then re-encode the result
for file in first_*.png ; do montage ${file} ${file/first/second} ${file/first/output} ; done
ffmpeg -i output_%05d.png output.avi

This actually lets you do a lot more image processing than just side-by-side; you can do arbitrary scale/overlay/background/etc. The problem is that the N-th frame from one file may not be at exactly the same time as the N-th frame from the other file if they are variable frame rate; this is something that AviSynth handles perfectly for you. If the clips are constant frame rate, that is not a problem.
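If your clips are (or might be) variable frame rate, one workaround is to force a constant rate when dumping the frames, and to tell montage not to pad or resize the tiles. A sketch (the 25 fps value is arbitrary; pick whatever matches your sources):

#!/bin/bash
# Force a constant 25 fps on extraction so frame N lines up across both clips
ffmpeg -i first.avi -r 25 first_%05d.png
ffmpeg -i second.avi -r 25 second_%05d.png
# -tile 2x1 -geometry +0+0 puts the frames side by side with no border or scaling
for file in first_*.png ; do
    montage ${file} ${file/first/second} -tile 2x1 -geometry +0+0 ${file/first/output}
done
ffmpeg -r 25 -i output_%05d.png -r 25 output.avi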

Combining clips by making a new clip containing both like this (whether through AviSynth or not) requires recompressing the video, which reduces video quality and/or increases file size.

I am not sure how to read ogg files into Avisynth, but there is probably a way. Check the FAQ on input formats.

Side comment: the choice of Theora/Ogg is strange. H.264 in an MP4 container would be a better choice.
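If you go that route, a conversion along these lines should do it (a sketch; on older ffmpeg builds the native AAC encoder needs the -strict experimental flag):

ffmpeg -i subject.ogg -c:v libx264 -crf 22 -c:a aac -strict experimental subject.mp4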

Alex I