
I would like to know how I can pipe PIL images to ffmpy and save them as a video or GIF, since PIL's quantization method causes strong quality losses in certain cases. I first make some modifications with PIL, and then want to export and save the result.

I did not find any information on the topic online, besides one post about piping PIL to FFmpeg: Pipe PIL images to ffmpeg stdin - Python. How could I implement something similar in ffmpy?

If I have for example this setup to begin with:

import ffmpy
import PIL
from PIL import Image as Img

images = [Img.open('frame 1.png'),Img.open('frame 2.png')]#How do I convert them to FFMPEG?

#Here I modify the images using PIL

#Save with FFMPEG:
ff = ffmpy.FFmpeg(
    inputs={images ?: None},#How do I insert PIL images here?
    outputs={'output.gif': None},
    executable='ffmpeg\\bin\\ffmpeg.exe')
ff.run()

How would I proceed to convert and save the images as a video using FFMPY? Is it possible by adding some steps inbetween? I wouldn't want to have to save all PIL images first as images, and then import them and save them with FFMPY a second time, since that would be very time consuming with larger files.

1 Answer

According to the ffmpy documentation, it seems like the most relevant option is using the pipe protocol.

  • Instead of using PIL for reading the images, we may read the PNG images as binary data into BytesIO (reading all images into one in-memory file-like object):

     # List of input image files (assume all images are in the same resolution, and the same "pixel format").
     images = ['frame 1.png', 'frame 2.png', 'frame 3.png', 'frame 4.png', 'frame 5.png', 'frame 6.png', 'frame 7.png', 'frame 8.png']
    
     # Read PNG images from files, and write to BytesIO object in memory (read images as binary data without decoding).
     images_in_memory = io.BytesIO()
     for png_file_name in images:
         with open(png_file_name, 'rb') as f:
             images_in_memory.write(f.read())
    
  • Run ffmpy.FFmpeg using pipe protocol.
    Pass images_in_memory.getbuffer() as input_data argument to ff.run:

     ff = ffmpy.FFmpeg(
         inputs={'pipe:0': '-y -f image2pipe -r 1'},
         outputs={'output.gif': None},
         executable='\\ffmpeg\\bin\\ffmpeg.exe')
    
     # Write the entire buffer of encoded PNG images to the "pipe".
     ff.run(input_data=images_in_memory.getbuffer(), stdout=subprocess.PIPE)
    

The above solution seems a bit awkward, but it's the best solution I could find using ffmpy.
There are other FFmpeg-to-Python bindings, like ffmpeg-python, that support writing the images one by one in a loop.
Using ffmpy, we have to read all the images into memory in advance.

The above solution keeps the PNG images in their encoded (binary) form.
Instead of decoding the images with PIL (for example), FFmpeg is going to decode the PNG images.
Letting FFmpeg decode the images is more efficient, and saves memory.
The limitation is that all the images must have the same resolution.
The images must also have the same "pixel format" (all RGB or all RGBA, but not a mix).
In case the images have different resolutions or pixel formats, we have to decode the images (and maybe resize them) using Python, and write the frames as "raw video".
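The raw-video route might look roughly like the sketch below: decode each frame with PIL, convert it to raw RGB bytes, and pipe the concatenated bytes to FFmpeg's rawvideo demuxer. This is only a sketch; `pack_raw_frames` and `write_gif` are hypothetical helper names, and the executable path just mirrors the example below (the frames are assumed to share one resolution):

```python
import io

from PIL import Image


def pack_raw_frames(pil_images):
    """Concatenate decoded PIL images as raw RGB24 bytes (no PNG encoding).

    All frames must share one resolution; each frame is converted to RGB
    so the pixel format is uniform.
    """
    buf = io.BytesIO()
    for img in pil_images:
        buf.write(img.convert("RGB").tobytes())  # raw pixel data, 3 bytes per pixel
    return buf


def write_gif(pil_images, width, height, out_path="output.gif", fps=1):
    """Pipe the raw frames to FFmpeg via ffmpy (hypothetical helper)."""
    import subprocess

    import ffmpy

    buf = pack_raw_frames(pil_images)
    ff = ffmpy.FFmpeg(
        # rawvideo input needs the resolution and pixel format stated explicitly.
        inputs={'pipe:0': f'-y -f rawvideo -s {width}x{height} -pix_fmt rgb24 -r {fps}'},
        outputs={out_path: None},
        executable='\\ffmpeg\\bin\\ffmpeg.exe')
    ff.run(input_data=buf.getbuffer(), stdout=subprocess.PIPE)
```

This avoids re-encoding frames as PNG in memory, at the cost of a larger buffer (raw RGB is uncompressed).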


For testing we may create PNG images using FFmpeg CLI:

ffmpeg -f lavfi -i testsrc=size=192x108:rate=1:duration=8 "frame %d.png"


Complete code sample:

import ffmpy
import io
import subprocess

#Building sample images using FFmpeg CLI for testing: ffmpeg -f lavfi -i testsrc=size=192x108:rate=1:duration=8 "frame %d.png"

# List of input image files (assume all images are in the same resolution, and the same "pixel format").
images = ['frame 1.png', 'frame 2.png', 'frame 3.png', 'frame 4.png', 'frame 5.png', 'frame 6.png', 'frame 7.png', 'frame 8.png']

# Read PNG images from files, and write to BytesIO object in memory (read images as binary data without decoding).
images_in_memory = io.BytesIO()
for png_file_name in images:
    with open(png_file_name, 'rb') as f:
        images_in_memory.write(f.read())

# Use pipe protocol: https://ffmpy.readthedocs.io/en/latest/examples.html#using-pipe-protocol
ff = ffmpy.FFmpeg(
    inputs={'pipe:0': '-y -f image2pipe -r 1'},
    outputs={'output.gif': None},
    executable='\\ffmpeg\\bin\\ffmpeg.exe')  # Note: ffmpeg.exe is in the C:\ffmpeg\bin folder

ff.run(input_data=images_in_memory.getbuffer(), stdout=subprocess.PIPE)

Sample output output.gif:


Update:

Same solution using images from Pillow:

The above solution also works if we save the images from Pillow to BytesIO in PNG format.

Example:

import ffmpy
import io
import subprocess
from PIL import Image as Img

#Building sample images using FFmpeg CLI for testing: ffmpeg -f lavfi -i testsrc=size=192x108:rate=1:duration=8 "frame %d.png"

# List of input image files (assume all images are in the same resolution, and the same "pixel format").
images = ['frame 1.png', 'frame 2.png', 'frame 3.png', 'frame 4.png', 'frame 5.png', 'frame 6.png', 'frame 7.png', 'frame 8.png']

# Read PNG images from files, and write to BytesIO object in memory (read images as binary data without decoding).
images_in_memory = io.BytesIO()
for png_file_name in images:
    img = Img.open(png_file_name)
    # Modify the images using PIL...
    img.save(images_in_memory, format="png")

# Use pipe protocol: https://ffmpy.readthedocs.io/en/latest/examples.html#using-pipe-protocol
ff = ffmpy.FFmpeg(
    inputs={'pipe:0': '-y -f image2pipe -r 1'},
    outputs={'output.gif': None},
    executable='\\ffmpeg\\bin\\ffmpeg.exe')

ff.run(input_data=images_in_memory.getbuffer(), stdout=subprocess.PIPE)

Encoding the images as PNG in memory is not the most efficient approach in terms of execution time, but it saves memory space.

Rotem
  • Hello, thank you for the explanation and sample code. However I modify the images using PIL first, and then save the edited results, which is why I need to convert the PIL images to a FFMPEG compatible format, and cannot load them directly with FFMPEG. – Tricky Devil May 22 '23 at 20:35
  • Maybe you would know how to apply the pipeline process from this post in FFMPY? https://stackoverflow.com/questions/43650860/pipe-pil-images-to-ffmpeg-stdin-python – Tricky Devil May 22 '23 at 20:37
  • 1
    I updated my answer. Why do you want to use FFMPY? If there is a way to apply the pipeline process, it is undocumented. – Rotem May 22 '23 at 20:50
  • FFMPY is more intuitive than FFMPEG for me, and easier to distribute and use in the final application. Thank you, the updated version works for me! I marked it as the correct answer for this post. But it is a bit slow. There's no more efficient way to convert PIL images to FFMPEG? And when I want to export to MP4 it doesn't work. When I change it to this: outputs={'output.mp4': None} The file that gets created cannot be played back. Why could that be? – Tricky Devil May 23 '23 at 21:54
  • 1
    I can't see a reason why `outputs={'output.mp4': None}` is not working. You may try selecting codec and pixel format: `outputs={'output.mp4': '-vcodec libx264 -pix_fmt yuv420p'}`. Maybe your FFmpeg version is built from sources, or an LGPL version? For making the execution a bit faster, we may convert each PIL images to NumPy array, save the NumPy array to `BytesIO`, and use `inputs={'pipe:0': '-y -f rawvideo -s 192x108 -pix_fmt rgb24'}`. – Rotem May 23 '23 at 22:07
  • Thank you, `outputs={'output.mp4': '-vcodec libx264 -pix_fmt yuv420p'}` works for MP4 format! I downloaded the ffmpeg.exe from here (below the source code link): https://ffmpeg.org/download.html#build-windows How do I save a Numpy array to the BytesIO stream? – Tricky Devil May 24 '23 at 13:52
  • 1
    Save image as bytes in raw RGB format: `images_in_memory.write(np.asarray(img).tobytes())` – Rotem May 24 '23 at 14:32