
I have a function that returns a frame as its result. I want to know how to make a video from a for-loop that calls this function, without saving every frame to disk and then creating the video.

What I have for now is something similar to:

import cv2
out = cv2.VideoWriter('video.mp4', cv2.VideoWriter_fourcc(*'DIVX'), 14.25, (500, 258))
for frame in frames:
    img_result = MyImageTreatmentFunction(frame) # returns a numpy array image
    out.write(img_result)
out.release()

The video is then created as video.mp4, and I can read it back into memory. I'm wondering whether there's a way to get this video into a variable that I can easily convert to bytes later. My purpose is to send the video via HTTP POST.

I've looked at ffmpeg-python and OpenCV but didn't find anything that applies to my case.

brenodacosta
  • are your frames in rgb? – kesh Sep 05 '22 at 12:53
  • Yes! And I also have them in PIL doing `img_result_PIL = Image.fromarray(img_result).convert('RGB')` – brenodacosta Sep 05 '22 at 13:02
  • This can be done, but it's quite tricky. In my `ffmpegio` package, I've implemented an in-memory ffmpeg filtering class called [`SimpleFilterBase`](https://github.com/python-ffmpegio/python-ffmpegio/blob/df11a7d93f79a2d5123a1581f46d476636a31c69/src/ffmpegio/streams/SimpleStreams.py#L592). What you need is a modification of this class so that the FFmpeg output is written to `BytesIO`. Feel free to explore my repo to get an idea, and open a discussion thread if you have particular questions (or post an issue if you would use the package with this enhancement). – kesh Sep 05 '22 at 15:49
  • One other idea (unverified). Look into the `pyav` package, which runs a custom-built ffmpeg shared library (so no subprocess). You might have better luck there. – kesh Sep 05 '22 at 18:04
  • @brenodacosta encoding MP4 in memory is problematic, because the output must be "seekable" (the writing position moves forward and backward). It may be possible using PyAV (I don't know). Would you consider another container format (like MKV)? – Rotem Sep 05 '22 at 21:46
  • the libraries `imageio` and `pyav` may be able to do this... I'm fairly sure that pyav can do it because it's basically a wrapper around the entire API of ffmpeg (i.e. more powerful than any "wrappers" that merely use an ffmpeg *subprocess*) – Christoph Rackwitz Sep 06 '22 at 19:10
  • @brenodacosta Can you give some feedback on my answer? I think your question may be relevant for many users. I tested my suggested solution, and I think it solves the problem. Have you tried executing my code sample? – Rotem Sep 13 '22 at 20:36
  • `pyav` works very well for what I needed. Actually, I didn't strictly need the output to be .mp4; I was thinking of a dynamic output, such as keeping the same format as the one I receive as bytes. But the proposed solution worked perfectly. – brenodacosta Sep 14 '22 at 07:52

1 Answer


We may use PyAV for encoding to an "in-memory file".

PyAV is a Pythonic binding for the FFmpeg libraries.
The interface is relatively low level, but it allows us to do things that are not possible using other FFmpeg bindings.

Here are the main stages for creating an MP4 in memory using PyAV:

  • Create a BytesIO "in-memory file":

     output_memory_file = io.BytesIO()
    
  • Use PyAV to open the "in-memory file" as an MP4 video output file:

     output = av.open(output_memory_file, 'w', format="mp4")
    
  • Add H.264 video stream to the MP4 container, and set codec parameters:

     stream = output.add_stream('h264', str(fps))
     stream.width = width
     stream.height = height
     stream.pix_fmt = 'yuv444p'
     stream.options = {'crf': '17'}
    
  • Iterate over the OpenCV images, convert each image to a PyAV VideoFrame, encode, and mux:

     for i in range(n_frames):
         img = make_sample_image(i)  # Create OpenCV image for testing (resolution 192x108, pixel format BGR).
         frame = av.VideoFrame.from_ndarray(img, format='bgr24')
         packet = stream.encode(frame)
         output.mux(packet)
    
  • Flush the encoder and close the "in memory" file:

     packet = stream.encode(None)
     output.mux(packet)
     output.close()
    

The following code sample encodes 100 synthetic images into an "in-memory" MP4 file.
Each synthetic image is built with OpenCV and shows a sequential blue frame number (used for testing).
At the end, the memory file is written to an output.mp4 file for testing.

import numpy as np
import cv2
import av
import io

n_frames = 100  # Select number of frames (for testing).
width, height, fps = 192, 108, 23.976  # Select video resolution and framerate.

output_memory_file = io.BytesIO()  # Create BytesIO "in memory file".

output = av.open(output_memory_file, 'w', format="mp4")  # Open "in memory file" as MP4 video output
stream = output.add_stream('h264', str(fps))  # Add H.264 video stream to the MP4 container, with framerate = fps.
stream.width = width  # Set frame width
stream.height = height  # Set frame height
stream.pix_fmt = 'yuv444p'   # Select yuv444p pixel format (better quality than default yuv420p).
stream.options = {'crf': '17'}  # Select low crf for high quality (the price is larger file size).


def make_sample_image(i):
    """ Build synthetic "raw BGR" image for testing """
    p = width//60
    img = np.full((height, width, 3), 60, np.uint8)
    cv2.putText(img, str(i+1), (width//2-p*10*len(str(i+1)), height//2+p*10), cv2.FONT_HERSHEY_DUPLEX, p, (255, 30, 30), p*2)  # Blue number
    return img


# Iterate the created images, encode and write to MP4 memory file.
for i in range(n_frames):
    img = make_sample_image(i)  # Create OpenCV image for testing (resolution 192x108, pixel format BGR).
    frame = av.VideoFrame.from_ndarray(img, format='bgr24')  # Convert image from NumPy Array to frame.
    packet = stream.encode(frame)  # Encode video frame
    output.mux(packet)  # "Mux" the encoded frame (add the encoded frame to MP4 file).

# Flush the encoder
packet = stream.encode(None)
output.mux(packet)

output.close()

# Write BytesIO from RAM to file, for testing
with open("output.mp4", "wb") as f:
    f.write(output_memory_file.getbuffer())
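
Since the goal in the question is to send the video via HTTP POST, the in-memory buffer can be turned into bytes with getvalue() and posted directly. The following is only a minimal sketch using the requests library; the endpoint URL and form field name are placeholders, not part of the original answer:

import requests  # Third-party HTTP client (pip install requests)

video_bytes = output_memory_file.getvalue()  # The entire encoded MP4 as bytes

# Hypothetical endpoint and field name - replace with the real values.
response = requests.post('https://example.com/upload',
                         files={'video': ('video.mp4', video_bytes, 'video/mp4')})
print(response.status_code)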
Rotem
  • It worked really well, exactly as I was expecting, thanks! I was just wondering how you chose the parameters for creating the stream for the MP4 video. I looked into the repository of [pyav](https://github.com/PyAV-Org/PyAV) and haven't found how exactly to choose these parameters. For example, if I wanted an .AVI video output, I imagine I would need to create a different video stream – brenodacosta Sep 14 '22 at 07:44
  • AVI is a video container, and it doesn't require a different codec. The parameters are not well documented in PyAV; I took most of them from the FFmpeg documentation. CRF 17 is probably too low, the default is 24. – Rotem Sep 14 '22 at 12:44
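
Regarding the last comment: switching to a different container (AVI, MKV) only requires changing the format argument passed to av.open; the H.264 stream setup can stay the same. Below is a minimal sketch under that assumption (not part of the original answer), reusing the parameters from the code above:

import io
import av

width, height, fps = 192, 108, 23.976  # Same test parameters as in the answer above

output_memory_file = io.BytesIO()
output = av.open(output_memory_file, 'w', format="avi")  # Use "matroska" instead for MKV output
stream = output.add_stream('h264', str(fps))  # Same H.264 codec, different container
stream.width = width
stream.height = height
stream.pix_fmt = 'yuv444p'
stream.options = {'crf': '17'}
# ... encode, mux and close exactly as in the MP4 example above.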