
(I'm a newbie when it comes to ffmpeg.) I have an image source which saves files to a given folder at a rate of 30 fps. I want to wait for every chunk of (let's say) 30 frames, encode it to H.264, and stream it over RTP to some other app.

I thought about writing a python app which just waits for the images, and then executes an ffmpeg command. For that I wrote the following code:

main.py:

import os
import Helpers
import argparse
import IniParser
import subprocess
from functools import partial

from Queue import Queue
from threading import Semaphore, Thread


def Run(config):

    os.chdir(config.Workdir)
    iteration = 1

    q = Queue()
    Thread(target=RunProcesses, args=(q, config.AllowedParallelRuns)).start()

    while True:

        Helpers.FileCount(config.FramesPathPattern, config.ChunkSize * iteration)

        command = config.FfmpegCommand.format(startNumber = (iteration-1)*config.ChunkSize, vFrames=config.ChunkSize)

        runFunction = partial(subprocess.Popen, command)
        q.put(runFunction)

        iteration += 1

def RunProcesses(queue, semaphoreSize):

    semaphore = Semaphore(semaphoreSize)

    while True:

        runFunction = queue.get()

        Thread(target=HandleProcess, args=(runFunction, semaphore)).start()

def HandleProcess(runFunction, semaphore):

    semaphore.acquire()

    p = runFunction()
    p.wait()

    semaphore.release()

if __name__ == '__main__':

    argparser = argparse.ArgumentParser()
    argparser.add_argument("config", type=str, help="Path for the config file")
    args = argparser.parse_args()

    iniFilePath = args.config

    config = IniParser.Parse(iniFilePath)

    Run(config)

Helpers.py (not really relevant):

import os
import time
from glob import glob

def FileCount(pattern, count):

    count = int(count)

    while True:

        currentFiles = glob(pattern)

        # Wait until enough frames exist and none is still being written.
        if len(currentFiles) >= count and all(CheckIfClosed(f) for f in currentFiles):
            break

        time.sleep(0.05)

def CheckIfClosed(filePath):

    # On Windows, renaming a file onto itself fails while another
    # process still holds it open for writing.
    try:
        os.rename(filePath, filePath)
        return True
    except OSError:
        return False

I used the following config file:

Workdir = "C:\Developer\MyProjects\Streaming\OutputStream\PPM"
; Workdir is the directory of reference from which all paths are relative to.
; You may still use full paths if you wish.

FramesPathPattern = "F*.ppm"
; The path pattern (wildcards allowed) where the rendered images are stored to.
; We use this pattern to detect how many rendered images are available for streaming.
; When a chunk of frames is ready - we stream it (or store to disk).

ChunkSize = 30 ; Number of frames for bulk.
; ChunkSize sets the number of frames we need to wait for, in order to execute the ffmpeg command.
; If the folder already contains several chunks, it will first process the first chunk, then the second, and so on...

AllowedParallelRuns = 1 ; Number of parallel allowed processes of ffmpeg.
; This sets how many parallel ffmpeg processes are allowed.
; If more than one chunk is available in the folder for processing, we will execute several ffmpeg processes in parallel.
; Only when one of the processes finishes will another process be allowed to start.

FfmpegCommand = "ffmpeg -re -r 30 -start_number {startNumber} -i F%08d.ppm -vframes {vFrames} -vf vflip -f rtp rtp://127.0.0.1:1234" ; Command to execute when a bulk is ready for streaming.
; Once a chunk is ready for processing, this is the command that will be executed (same as running it from the terminal).
; There is however a minor difference. Since every chunk starts with a different frame number, you can use the
; expression "{startNumber}", which automatically takes the value of the matching start frame number.
; You can also use "{vFrames}" as an expression for the ChunkSize which was set above in the "ChunkSize" entry.
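For illustration, here is how the placeholder substitution plays out for the second chunk (iteration 2 with ChunkSize 30) — the same `str.format` call the script uses; the concrete numbers are just an example:

```python
# Illustrative only: fill in {startNumber} and {vFrames} for chunk 2 of 30 frames.
template = ("ffmpeg -re -r 30 -start_number {startNumber} -i F%08d.ppm "
            "-vframes {vFrames} -vf vflip -f rtp rtp://127.0.0.1:1234")
chunk_size = 30
iteration = 2  # second chunk
command = template.format(startNumber=(iteration - 1) * chunk_size, vFrames=chunk_size)
print(command)
# -> ffmpeg -re -r 30 -start_number 30 -i F%08d.ppm -vframes 30 -vf vflip -f rtp rtp://127.0.0.1:1234
```

Note that the `%08d` in the pattern is untouched by `str.format`, which only substitutes the `{...}` placeholders.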

Please note that if I set "AllowedParallelRuns = 2" then it allows multiple ffmpeg processes to run simultaneously.

I then tried to play it with ffplay to see if I was doing it right. The first chunk streamed fine. The following chunks weren't so great: I got a lot of "[sdp @ 0000006de33c9180] RTP: dropping old packet received too late" messages.

What should I do to get ffplay to play the stream in the order of the incoming images? Is it right to run parallel ffmpeg processes? Is there a better solution to my problem?

Thank you!

Omer
  • Most likely, since you restart the ffmpeg process each time, the RTP timestamps reset, but the client perceives this as a single stream and expects continuous PTS values. Not sure, but maybe you can provide an initial pts value from the command line, but then you have to know the last one as well. You could get around this if you actually used an ffmpeg Python wrapper and did the streaming yourself. – Rudolfs Bundulis Jan 16 '19 at 10:48

1 Answer


As I stated in the comment, since you rerun ffmpeg each time, the PTS values are reset, but the client perceives this as a single continuous stream and thus expects increasing PTS values.

As I said, you could use an ffmpeg Python wrapper to control the streaming yourself, but yeah, that is quite an amount of code. But there is actually a dirty workaround.

So, apparently there is an -itsoffset parameter with which you can offset the input timestamps (see the FFmpeg documentation). Since you know and control the rate, you could pass an increasing value with this parameter, so that each subsequent stream is offset by the proper duration. E.g. if you stream 30 frames each time and you know the fps is 30, the 30 frames create a time interval of one second. So on each call to ffmpeg you would increase the -itsoffset value by one second, which should then be added to the output PTS values. But I can't guarantee this works.
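Sketching that idea against the question's own command template (assuming 30 fps, 30-frame chunks, and the 1-based chunk index from the script; as said above, untested):

```python
# Hypothetical sketch: shift each chunk's input timestamps with -itsoffset
# so the output PTS values keep increasing across ffmpeg invocations.
FPS = 30
CHUNK_SIZE = 30

def build_command(iteration):
    """Build the ffmpeg command for a 1-based chunk index."""
    start_number = (iteration - 1) * CHUNK_SIZE
    offset_seconds = float(start_number) / FPS  # 30 frames at 30 fps = 1 s per chunk
    return ("ffmpeg -re -r {fps} -itsoffset {offset} -start_number {start} "
            "-i F%08d.ppm -vframes {vframes} -vf vflip -f rtp rtp://127.0.0.1:1234"
            ).format(fps=FPS, offset=offset_seconds,
                     start=start_number, vframes=CHUNK_SIZE)
```

So chunk 1 would get `-itsoffset 0.0`, chunk 2 `-itsoffset 1.0`, and so on.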

Since the idea about -itsoffset did not work, you could also try feeding the images to ffmpeg via stdin - see this link.
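A minimal sketch of that stdin idea, assuming a single long-lived ffmpeg process reading PPM frames through the image2pipe demuxer (the file pattern and frame rate are taken from the question; again untested):

```python
import glob
import subprocess

# Hypothetical sketch: one persistent ffmpeg process fed via stdin,
# so timestamps stay continuous across chunks.
FFMPEG_CMD = ["ffmpeg", "-f", "image2pipe", "-framerate", "30",
              "-c:v", "ppm", "-i", "-",
              "-vf", "vflip", "-f", "rtp", "rtp://127.0.0.1:1234"]

def stream_frames(pattern="F*.ppm"):
    proc = subprocess.Popen(FFMPEG_CMD, stdin=subprocess.PIPE)
    for path in sorted(glob.glob(pattern)):   # poll the folder in the real script
        with open(path, "rb") as f:
            proc.stdin.write(f.read())        # ffmpeg parses PPM frames off the pipe
    proc.stdin.close()
    return proc.wait()
```

In that way you avoid restarting ffmpeg entirely: the directory-polling loop stays in Python, and ffmpeg sees one uninterrupted input stream.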

Rudolfs Bundulis
  • This sounds pretty much what I need. I will give that a try and will update here if it worked :) Thank you! – Omer Jan 16 '19 at 12:40
  • That didn't work. I tried adding a dynamic offset in my python script (as a function of the chunk index), but it didn't help. After reading about itsoffset in here: [link](https://superuser.com/questions/538031/what-is-difference-between-ss-and-itsoffset-in-ffmpeg), it seems it's not what I need. – Omer Jan 16 '19 at 13:55
  • Ok yeah, the link you found explains that it is different. Well, another easy option - maybe pipe the jpeg files to ffmpeg via stdin, again not sure if that will work, but just a quick idea. In that way you can poll the directory and write the files whenever they appear, I edited the answer. – Rudolfs Bundulis Jan 17 '19 at 08:35