
I'm using the ffmpeg-python library.

I have used the example code (https://github.com/kkroening/ffmpeg-python/tree/master/examples) to asynchronously read in and process audio and video streams. The processing is custom and not something a built-in ffmpeg command can achieve (imagine something like TensorFlow deep dreaming on both the audio and the video).

I then want to recombine the audio and video streams I have created. Currently, the only way I can see to do this is to write both streams out to separate files (as is done, e.g., in this answer: How to combine The video and audio files in ffmpeg-python) and then use ffmpeg to combine them afterwards. This has the major disadvantage that the result cannot be streamed: the audio and video must be completely finished processing before you can start playing the combined output.

Is there any way to combine them without going through files as an intermediate step?

Technically, the fact that the streams were initially read in from ffmpeg is irrelevant. You may as well assume that I'm in the following situation:

def audio_stream():
    for i in range(10):
        yield bytes(44100 * 2 * 4)  # one second of audio: 44.1 kHz sample rate, 2 channels, s32le (4 bytes/sample)

def video_stream():
    for i in range(10):
        yield bytes(60 * 1080 * 1920 * 3)  # one second of video: 60 fps, 1920x1080, rgb24 (3 bytes/pixel)

# How can I write both byte streams into a single output file without first writing each one to its own file?
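For reference, the file-based workaround I described above looks roughly like this (file names and format parameters are illustrative):

import ffmpeg

# Dump each processed stream to its own intermediate file.
with open('audio.raw', 'wb') as f:
    for chunk in audio_stream():
        f.write(chunk)
with open('video.raw', 'wb') as f:
    for chunk in video_stream():
        f.write(chunk)

# Only after both files are complete can ffmpeg mux them together.
audio_in = ffmpeg.input('audio.raw', format='s32le', ac=2, ar=44100)
video_in = ffmpeg.input('video.raw', format='rawvideo', pix_fmt='rgb24',
                        s='1920x1080', framerate=60)
ffmpeg.output(video_in, audio_in, 'out.mp4').overwrite_output().run()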

I would like to use ffmpeg.concat, but that requires streams created with ffmpeg.input, which as far as I can tell only accepts filenames. Is there any other way? Here are the docs: https://kkroening.github.io/ffmpeg-python/.

nullUser

1 Answer


It seems you have to use pipes. Take a look at the "Process video frame-by-frame using numpy" example. If you pass 'pipe:' to ffmpeg.input() instead of a file name, you can then use stdin.write to push the raw data.
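For example, here is a minimal sketch of that idea (untested; the FIFO path, codecs, and output name are illustrative). ffmpeg-python's run_async only exposes a single stdin pipe, so this feeds the video through 'pipe:' and the audio through a named pipe, which assumes a Unix-like system:

import ffmpeg
import os
import threading

audio_fifo = '/tmp/audio_fifo'  # illustrative path
os.mkfifo(audio_fifo)

# Describe both raw inputs so ffmpeg knows how to interpret the bytes.
video_in = ffmpeg.input('pipe:', format='rawvideo', pix_fmt='rgb24',
                        s='1920x1080', framerate=60)
audio_in = ffmpeg.input(audio_fifo, format='s32le', ac=2, ar=44100)

process = (
    ffmpeg
    .output(video_in, audio_in, 'out.mp4',
            vcodec='libx264', acodec='aac', pix_fmt='yuv420p')
    .overwrite_output()
    .run_async(pipe_stdin=True)
)

def feed_audio():
    # Opening the FIFO for writing blocks until ffmpeg opens it for reading.
    with open(audio_fifo, 'wb') as f:
        for chunk in audio_stream():
            f.write(chunk)

audio_thread = threading.Thread(target=feed_audio)
audio_thread.start()

# Feed the video through ffmpeg's stdin on the main thread.
for chunk in video_stream():
    process.stdin.write(chunk)
process.stdin.close()

audio_thread.join()
process.wait()
os.remove(audio_fifo)

Because both inputs are pipes, ffmpeg can mux the output as the data arrives, rather than only after both streams have finished processing.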

Nanev