I'd like to use FFmpeg in a Unity project to concatenate multiple videos, selected or recorded by the user. The Unity plugin provides FFmpeg 3.4.1, and it's possible to run all the normal one-line commands. But I'm not sure about piping output into subsequent commands.
I managed to concatenate videos from a list.txt in general, but I have some issues with the target framerate and out-of-sync audio streams.
This is what I do right now:
ffmpeg -y -f concat -safe 0 -i list.txt output.mp4 -hide_banner
In my list.txt there are two mp4 files with inpoint and outpoint data:
file video_01.mp4
inpoint 5
outpoint 20.2800006866455
file video_02.mp4
inpoint 10
outpoint 24.1599998474121
The two videos have 50 fps and 25 fps respectively; one has no audio (or is muted), while the other one has an audio stream.
What is the best way to get a robust output from different sources with a defined video resolution, fps, codec and audio stream? Do I have to use -filter_complex?
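For reference, here is a rough sketch of the -filter_complex approach I've been experimenting with. It normalizes both clips to a common resolution, framerate and sample rate before concatenating. The target values (1280x720, 25 fps, libx264, AAC at 44100 Hz) are just examples, and I'm assuming video_01.mp4 is the clip without audio, so a silent anullsrc track is generated for it; swap the audio mappings if it's the other way around. I use -t with durations instead of -to, since -to on inputs may not be supported in FFmpeg 3.4.1:

```shell
# Sketch only: normalize both clips, then concat with the filter.
# Durations: 20.28 - 5 = 15.28 s and 24.16 - 10 = 14.16 s.
ffmpeg -y -hide_banner \
  -ss 5  -t 15.28 -i video_01.mp4 \
  -ss 10 -t 14.16 -i video_02.mp4 \
  -f lavfi -t 15.28 -i anullsrc=channel_layout=stereo:sample_rate=44100 \
  -filter_complex \
    "[0:v]fps=25,scale=1280:720,setsar=1[v0]; \
     [1:v]fps=25,scale=1280:720,setsar=1[v1]; \
     [2:a]aresample=44100[a0]; \
     [1:a]aresample=44100[a1]; \
     [v0][a0][v1][a1]concat=n=2:v=1:a=1[v][a]" \
  -map "[v]" -map "[a]" \
  -c:v libx264 -c:a aac \
  output.mp4
```

The setsar=1 is there because the concat filter requires matching sample aspect ratios, and the lavfi anullsrc input provides the silent audio segment so both clips enter concat with one video and one audio stream. Is this the right direction, or is there a simpler/more robust way?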