I want to use ffmpeg to read an RTSP stream, extract the frames via a pipe, do some processing on them in Python, and then pipe the processed frames back into ffmpeg to combine them with the original audio. I'm using Python's subprocess module to run the ffmpeg commands and to read the frames from, and write them back to, ffmpeg.
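For concreteness, here is a minimal sketch of the setup I have in mind, using two separate ffmpeg processes (the RTSP URL, resolution, frame rate, pixel format, and output file are placeholders for my actual stream, not a tested configuration):

```python
import subprocess
import numpy as np

RTSP_URL = "rtsp://example.com/stream"   # placeholder URL
WIDTH, HEIGHT, FPS = 1280, 720, 25       # assumed stream properties
FRAME_SIZE = WIDTH * HEIGHT * 3          # bytes per bgr24 frame

# Decoder: read the RTSP stream and write raw frames to stdout.
decoder = subprocess.Popen(
    ["ffmpeg", "-i", RTSP_URL,
     "-f", "rawvideo", "-pix_fmt", "bgr24", "-"],
    stdout=subprocess.PIPE,
)

# Encoder: read raw frames from stdin and encode them to a file.
encoder = subprocess.Popen(
    ["ffmpeg", "-y",
     "-f", "rawvideo", "-pix_fmt", "bgr24",
     "-s", f"{WIDTH}x{HEIGHT}", "-r", str(FPS), "-i", "-",
     "-c:v", "libx264", "output.mp4"],
    stdin=subprocess.PIPE,
)

while True:
    raw = decoder.stdout.read(FRAME_SIZE)
    if len(raw) < FRAME_SIZE:
        break
    frame = np.frombuffer(raw, dtype=np.uint8).reshape((HEIGHT, WIDTH, 3))
    # ... frame processing would happen here ...
    encoder.stdin.write(frame.tobytes())

encoder.stdin.close()
decoder.stdout.close()
encoder.wait()
decoder.wait()
```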
Questions:
- Is it possible to pipe both stdin and stdout so as to extract the frames and then feed them back in after the processing?
- Do I also have to pipe the audio separately and feed it in alongside the processed frames, or can I simply copy the audio stream when mapping the output (something like the sketch below)?
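To illustrate the second question, this is roughly what I'm hoping the encoding side could look like: the original stream as a second input whose audio is copied unchanged (the flags are my guess at how this would be mapped, not something I've verified):

```python
# Hypothetical encoder command: processed raw frames on stdin as input 0,
# the original RTSP stream as input 1, with its audio track copied as-is.
encoder_cmd = [
    "ffmpeg", "-y",
    "-f", "rawvideo", "-pix_fmt", "bgr24",
    "-s", "1280x720", "-r", "25", "-i", "-",    # input 0: processed frames
    "-i", "rtsp://example.com/stream",          # input 1: original stream
    "-map", "0:v", "-map", "1:a",               # video from the pipe, audio from RTSP
    "-c:v", "libx264", "-c:a", "copy",
    "output.mp4",
]
```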