62

My work recently involves programmatically making videos. In Python, the typical workflow looks something like this:

import subprocess, Image, ImageDraw  # old-style PIL imports; on modern Pillow: from PIL import Image, ImageDraw

# frames_per_second, video_duration_seconds and createFrame() stand in for the
# actual frame-generation code
for i in range(frames_per_second * video_duration_seconds):
    img = createFrame(i)
    img.save("%07d.png" % i)  # one PNG per frame written to disk

subprocess.call(["ffmpeg", "-y", "-r", str(frames_per_second), "-i", "%07d.png",
                 "-vcodec", "mpeg4", "-qscale", "5", "-r", str(frames_per_second), "video.avi"])

This workflow creates an image for each frame in the video and saves it to disk. After all images have been saved, ffmpeg is called to construct a video from all of the images.

Saving the images to disk (not the creation of the images in memory) consumes the majority of the cycles here, and does not appear to be necessary. Is there some way to perform the same function, but without saving the images to disk? Ideally, ffmpeg would be started first, and each image would be fed to it as soon as it is constructed, instead of being written to a file.

Brandon
  • I don't know how you're creating the images, but ffmpeg accepts pipe inputs too: `ffmpeg -f image2pipe -c:v png -r 30000/1001 -i -`. – llogan Nov 08 '12 at 18:19
  • For simplicity, just assume that `createFrame(i)` returns a Python Image Library image object, which we store in `img`. I think your response is a step in the right direction, but half the challenge would be piping the constructed images to ffmpeg while in the python program. – Brandon Nov 08 '12 at 19:20
  • maybe queue and then pipe the images through a second thread? – unddoch Nov 08 '12 at 20:01
  • May be able to send your input into a named pipe and pass that to ffmpeg, as well, basically the same process... – rogerdpack Mar 04 '14 at 17:13
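
To illustrate the named-pipe idea from the last comment, here is a minimal, untested sketch (Unix-only, since it relies on `os.mkfifo`; the fifo path, frame size and black-to-red test frames are placeholders, not part of the original question):

import os, subprocess
from PIL import Image

fps, duration = 24, 100
fifo_path = "frames.fifo"   # hypothetical fifo name
os.mkfifo(fifo_path)

# ffmpeg reads a stream of JPEGs from the named pipe instead of numbered files
p = subprocess.Popen(["ffmpeg", "-y", "-f", "image2pipe", "-vcodec", "mjpeg",
                      "-r", str(fps), "-i", fifo_path,
                      "-vcodec", "mpeg4", "-qscale", "5", "-r", str(fps), "video.avi"])

# opening the fifo for writing blocks until ffmpeg opens it for reading
with open(fifo_path, "wb") as pipe_out:
    for i in range(fps * duration):
        im = Image.new("RGB", (300, 300), (i % 256, 1, 1))
        im.save(pipe_out, "JPEG")   # frame goes straight into the pipe, never onto disk

p.wait()
os.remove(fifo_path)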

3 Answers

76

Ok, I got it working, thanks to LordNeckbeard's suggestion to use image2pipe. I had to use JPEG encoding instead of PNG because image2pipe with PNG doesn't work on my version of ffmpeg. The first script is essentially the same as your question's code, except I implemented a simple image creation routine that just produces frames going from black to red. I also added some code to time the execution.

serial execution

import subprocess, Image  # old-style PIL import; on modern Pillow: from PIL import Image

fps, duration = 24, 100
for i in range(fps * duration):
    # simple test frame: the red channel ramps up, so the video fades from black to red
    im = Image.new("RGB", (300, 300), (i, 1, 1))
    im.save("%07d.jpg" % i)  # every frame is written to disk
subprocess.call(["ffmpeg", "-y", "-r", str(fps), "-i", "%07d.jpg",
                 "-vcodec", "mpeg4", "-qscale", "5", "-r", str(fps), "video.avi"])

parallel execution (with no images saved to disk)

import Image  # old-style PIL import; on modern Pillow: from PIL import Image
from subprocess import Popen, PIPE

fps, duration = 24, 100
# ffmpeg reads a stream of JPEGs from stdin ('-i -') and encodes as frames arrive
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'mjpeg', '-r', '24', '-i', '-',
           '-vcodec', 'mpeg4', '-qscale', '5', '-r', '24', 'video.avi'], stdin=PIPE)
for i in range(fps * duration):
    im = Image.new("RGB", (300, 300), (i, 1, 1))
    im.save(p.stdin, 'JPEG')  # write the encoded frame straight into the pipe, no file on disk
p.stdin.close()
p.wait()

The results are interesting; I ran each script 3 times to compare performance.

serial:

12.9062321186
12.8965060711
12.9360799789

parallel:

8.67797684669
8.57139396667
8.38926696777

So it seems the parallel version is about 1.5 times faster.

Marwan Alsabbagh
  • For anyone who stumbles upon this in the future, replacing 'mjpeg' with 'png' and 'JPEG' with 'PNG' worked for me to use png. – Brandon Nov 10 '12 at 23:26
  • I managed to get the best quality using `-vcodec png` and `im.save(p.stdin, 'PNG')` though the filesize is x4 – bluesummers May 09 '17 at 12:02
  • Darn, the parallel script worked perfectly until I updated to Python 3.6. Now I get `OSError: [WinError 6] The handle is invalid` on the `p = Popen(['ffmpeg',...` line. Any known workarounds? – zelusp Jun 21 '17 at 15:59
  • Found a solution [here](https://stackoverflow.com/questions/40108816/python-running-as-windows-service-oserror-winerror-6-the-handle-is-invalid). Basically, just add `stdout=PIPE` as an extra argument to `Popen` – zelusp Jun 21 '17 at 16:16
  • It should be streamed, not parallel. – einstein Nov 16 '17 at 22:05
  • @einstein FFmpeg is encoding the video in parallel to the images being generated. – Jason C Nov 06 '22 at 19:08
  • @MarwanAlsabbagh Might try an uncompressed intermediate image format, could be burning a lot of cycles encoding as PNG or JPEG just to immediately decode it again. About to try experiments with it now, will post back if I remember to. – Jason C Nov 06 '22 at 19:09
  • Yeah OK two things about this: First, careful of buffering in the pipe; if there's a big buffer it can be a huge performance increase to flush the write end of the pipe after every image; that way ffmpeg will encode each frame immediately while your app does its processing in parallel. And second, "png" encoding is super slow (at least in Qt's C++ implementation), switching to "bmp" or another uncompressed format *blazes*. Probably would've knocked the 8 seconds in this example down to 1 or 2. – Jason C Nov 07 '22 at 22:56
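
Following up on the comments above, a hedged sketch of the BMP-plus-flush variant (not benchmarked here; everything else is kept identical to the parallel script in this answer, and the modern Pillow import is an assumption):

from PIL import Image
from subprocess import Popen, PIPE

fps, duration = 24, 100
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'bmp', '-r', str(fps), '-i', '-',
           '-vcodec', 'mpeg4', '-qscale', '5', '-r', str(fps), 'video.avi'], stdin=PIPE)
for i in range(fps * duration):
    im = Image.new("RGB", (300, 300), (i % 256, 1, 1))
    im.save(p.stdin, 'BMP')   # uncompressed frame, so no PNG/JPEG encoding cost
    p.stdin.flush()           # hand the frame to ffmpeg immediately instead of letting it buffer
p.stdin.close()
p.wait()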
9

imageio supports this directly. It uses FFmpeg and the Video Acceleration API, making it very fast:

import imageio
import numpy as np  # imageio expects array-like frames

# frames_per_second, video_duration_seconds and createFrame() are the
# placeholders from the question
writer = imageio.get_writer('video.avi', fps=frames_per_second)
for i in range(frames_per_second * video_duration_seconds):
    img = createFrame(i)
    writer.append_data(np.asarray(img))  # converts e.g. a PIL image to a numpy array
writer.close()

This requires the ffmpeg plugin, which can be installed using e.g. `pip install imageio[ffmpeg]`.
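
For a self-contained test that mirrors the black-to-red example from the accepted answer (the file name, frame size and frame rate below are just illustrative values), the frames can be built directly as numpy arrays:

import imageio
import numpy as np

fps, duration = 24, 100
writer = imageio.get_writer('video.avi', fps=fps)
for i in range(fps * duration):
    frame = np.zeros((300, 300, 3), dtype=np.uint8)
    frame[..., 0] = i % 256    # red channel ramps up, so the video fades from black to red
    writer.append_data(frame)  # frames are piped to ffmpeg; nothing is written per-frame to disk
writer.close()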

Joren
2

I'm kind of late, but the VidGear Python library's WriteGear API automates the process of piping OpenCV frames into FFmpeg on any platform in real time, with hardware encoder support, while keeping the same opencv-python syntax. Here's a basic Python example:

# import libraries
from vidgear.gears import WriteGear
import cv2

# define FFmpeg tweak parameters (codec, CRF, preset) for the writer
output_params = {"-vcodec": "libx264", "-crf": 0, "-preset": "fast"}

# open a live webcam video stream on the first device (index 0)
stream = cv2.VideoCapture(0)

# define the writer with output filename 'Output.mp4'
writer = WriteGear(output_filename='Output.mp4', compression_mode=True, logging=True, **output_params)

# infinite frame-processing loop
while True:

    # read a frame from the stream
    (grabbed, frame) = stream.read()

    # break out of the loop if no frame was grabbed
    if not grabbed:
        break

    # {do something with the frame here}
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # write the modified frame to the writer
    writer.write(gray)

    # show the output window
    cv2.imshow("Output Frame", frame)

    # break out on 'q' key-press
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close the output window
cv2.destroyAllWindows()

# safely close the video stream
stream.release()

# safely close the writer
writer.close()

Source: https://abhitronix.github.io/vidgear/latest/gears/writegear/compression/usage/#using-compression-mode-with-opencv

You can check out VidGear Docs for more advanced applications and features.

abhiTronix
  • Can you also write uncompressed video with this api, i.e. write images into a non-compressing container just to avoid saving each image individually (which is very slow). – matanster Dec 30 '21 at 20:28
  • @matanster yes, you can do anything that is possible with FFmpeg itself. You can use encoders like `r10k`, `r210` in `-vcodec` to achieve fully uncompressed AVI/MOV video or anything similarly with other specific encoders: https://superuser.com/a/347434 – abhiTronix Dec 31 '21 at 06:58
  • You can even stream directly with a URL: https://abhitronix.github.io/vidgear/latest/gears/writegear/compression/usage/#using-compression-mode-for-streaming-urls – abhiTronix Dec 31 '21 at 07:07
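
Building on the two comments above, here is a hedged sketch of writing uncompressed output with the same WriteGear calls shown in the answer; the `rawvideo` encoder and the output filename are assumptions, and encoders like `r10k`/`r210` from the linked superuser answer should slot into `-vcodec` the same way:

from vidgear.gears import WriteGear
import cv2

# assumed parameters: uncompressed frames in an AVI container
output_params = {"-vcodec": "rawvideo"}

stream = cv2.VideoCapture(0)
writer = WriteGear(output_filename='Output_uncompressed.avi',
                   compression_mode=True, logging=True, **output_params)

while True:
    (grabbed, frame) = stream.read()
    if not grabbed:
        break
    writer.write(frame)  # frames go straight to FFmpeg, never to per-frame image files

stream.release()
writer.close()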