I have a scenario where I need to retrieve a video stream from an RTMP server, apply image processing (specifically, adding blur to frames), and then forward the processed stream to another RTMP server (in this case, Twitch).
Currently, I'm using ffmpeg together with cv2 to retrieve and process the stream, but this approach introduces significant lag when applying the blur, so I'm looking for a more efficient alternative. I also tried handling the entire pipeline with ffmpeg alone, but I couldn't find a way to selectively process frames based on a given condition and then transmit only those processed frames.
Is there a more efficient approach or alternative solution that can address this issue and enable real-time video stream processing with minimal lag?
Thanks in advance!
import subprocess

import cv2


def forward_stream(server_url, stream_key, twitch_stream_key):
    get_ffmpeg_command = [...]
    send_ffmpeg_command = [...]

    # Start the FFmpeg process that pulls the incoming RTMP stream
    read_process = subprocess.Popen(get_ffmpeg_command, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)

    # Start the FFmpeg process that pushes the processed stream to Twitch
    send_process = subprocess.Popen(send_ffmpeg_command, stdin=subprocess.PIPE, stderr=subprocess.DEVNULL)

    # Open video capture on the source stream
    cap = cv2.VideoCapture(server_url)

    while True:
        # Read the next frame
        ret, frame = cap.read()
        if not ret:
            break

        # Decide whether this frame needs to be blurred
        should_blur = machine_learning_algorithm(frame)

        # Apply blur if necessary
        if should_blur:
            frame = cv2.blur(frame, (25, 25))

        # Write the raw frame to the sending FFmpeg process
        send_process.stdin.write(frame.tobytes())

    # Release resources
    cap.release()
    send_process.stdin.close()
    send_process.wait()
    read_process.stdout.close()
    read_process.wait()
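For context, the two elided FFmpeg commands look roughly like the sketch below; the resolution, frame rate, and encoder flags are placeholders rather than the exact values I'm using:

# Rough sketch of the elided commands. The 1920x1080 size and 30 fps are
# placeholders and must match the actual frames written to stdin.
get_ffmpeg_command = [
    'ffmpeg',
    '-i', f'{server_url}/{stream_key}',   # pull the incoming RTMP stream
    '-f', 'rawvideo',                     # decode to raw frames
    '-pix_fmt', 'bgr24',                  # BGR byte order, matching cv2
    '-'                                   # write raw frames to stdout
]

send_ffmpeg_command = [
    'ffmpeg',
    '-f', 'rawvideo',                     # raw frames arrive on stdin
    '-pix_fmt', 'bgr24',
    '-s', '1920x1080',                    # must match the frame size being written
    '-r', '30',                           # must match the source frame rate
    '-i', '-',                            # read from stdin
    '-c:v', 'libx264',                    # re-encode for Twitch
    '-preset', 'veryfast',
    '-f', 'flv',
    f'rtmp://live.twitch.tv/app/{twitch_stream_key}'
]

The sending command has to know the exact width, height, and pixel format of the frames written to its stdin, otherwise the output gets garbled.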