
Transcode jobs take 2-3x longer in total when run individually than when run as part of a single combined config file/job.

First, use a single config file to transcode five or more versions of a piece of content and note the total transcode time. Second, break each version out of that config file into its own single-rendition transcode job, note the time each one takes individually, and add those times up. Comparing that sum against the time of the original combined job, I am seeing a 2-3x increase in total transcode time.

Job Config

```json
{
  "config": {
    "elementaryStreams": [
      { "key": "360p1-125kbps-h264",
        "videoStream": { "codec": "h264", "profile": "high", "preset": "slow", "widthPixels": 640, "heightPixels": 360, "frameRate": 29.97, "pixelFormat": "yuv420p", "bitrateBps": 125000, "rateControlMode": "vbr", "enableTwoPass": true, "gopDuration": "4s", "allowOpenGop": true, "entropyCoder": "cabac", "bFrameCount": 3, "bPyramid": true } },
      { "key": "360p2-250kbps-h264",
        "videoStream": { "codec": "h264", "profile": "high", "preset": "slow", "widthPixels": 640, "heightPixels": 360, "frameRate": 29.97, "pixelFormat": "yuv420p", "bitrateBps": 250000, "rateControlMode": "vbr", "enableTwoPass": true, "gopDuration": "4s", "allowOpenGop": true, "entropyCoder": "cabac", "bFrameCount": 3, "bPyramid": true } },
      { "key": "480p1-400kbps-h264",
        "videoStream": { "codec": "h264", "profile": "high", "preset": "slow", "widthPixels": 854, "heightPixels": 480, "frameRate": 29.97, "pixelFormat": "yuv420p", "bitrateBps": 400000, "rateControlMode": "vbr", "enableTwoPass": true, "gopDuration": "4s", "allowOpenGop": true, "entropyCoder": "cabac", "bFrameCount": 3, "bPyramid": true } },
      { "key": "480p2-800kbps-h264",
        "videoStream": { "codec": "h264", "profile": "high", "preset": "slow", "widthPixels": 854, "heightPixels": 480, "frameRate": 29.97, "pixelFormat": "yuv420p", "bitrateBps": 800000, "rateControlMode": "vbr", "enableTwoPass": true, "gopDuration": "4s", "allowOpenGop": true, "entropyCoder": "cabac", "bFrameCount": 3, "bPyramid": true } },
      { "key": "720p-1600kbps-h264",
        "videoStream": { "codec": "h264", "profile": "high", "preset": "slow", "widthPixels": 1280, "heightPixels": 720, "frameRate": 29.97, "pixelFormat": "yuv420p", "bitrateBps": 1600000, "rateControlMode": "vbr", "enableTwoPass": true, "gopDuration": "4s", "allowOpenGop": true, "entropyCoder": "cabac", "bFrameCount": 3, "bPyramid": true } },
      { "key": "720p-2500kbps-h264",
        "videoStream": { "codec": "h264", "profile": "high", "preset": "slow", "widthPixels": 1280, "heightPixels": 720, "frameRate": 29.97, "pixelFormat": "yuv420p", "bitrateBps": 2500000, "rateControlMode": "vbr", "enableTwoPass": true, "gopDuration": "4s", "allowOpenGop": true, "entropyCoder": "cabac", "bFrameCount": 3, "bPyramid": true } },
      { "key": "1080p-5500kbps-h264",
        "videoStream": { "codec": "h264", "profile": "high", "preset": "slow", "widthPixels": 1920, "heightPixels": 1080, "frameRate": 29.97, "pixelFormat": "yuv420p", "bitrateBps": 5500000, "rateControlMode": "vbr", "enableTwoPass": true, "gopDuration": "4s", "allowOpenGop": true, "entropyCoder": "cabac", "bFrameCount": 3, "bPyramid": true } }
    ],
    "muxStreams": [
      { "key": "360p1-125kbps-h264", "container": "mp4", "elementaryStreams": [ "360p1-125kbps-h264" ] },
      { "key": "360p2-250kbps-h264", "container": "mp4", "elementaryStreams": [ "360p2-250kbps-h264" ] },
      { "key": "460p1-400kbps-h264", "container": "mp4", "elementaryStreams": [ "480p1-400kbps-h264" ] },
      { "key": "460p2-800kbps-h264", "container": "mp4", "elementaryStreams": [ "480p2-800kbps-h264" ] },
      { "key": "720p-1600kbps-h264", "container": "mp4", "elementaryStreams": [ "720p-1600kbps-h264" ] },
      { "key": "720p-2500kbps-h264", "container": "mp4", "elementaryStreams": [ "720p-2500kbps-h264" ] },
      { "key": "1080p-5500kbps-h264", "container": "mp4", "elementaryStreams": [ "1080p-5500kbps-h264" ] }
    ]
  }
}
```
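One quick sanity check that can be run on a config like this is verifying that every muxStream references a defined elementaryStream key. The helper below is illustrative, not part of the Transcoder API, and it embeds a trimmed two-rendition version of the config above:

```python
import json

# Trimmed two-rendition version of the job config above, for illustration.
CONFIG_JSON = """
{ "config": {
    "elementaryStreams": [
      { "key": "360p1-125kbps-h264",
        "videoStream": { "codec": "h264", "bitrateBps": 125000 } },
      { "key": "720p-1600kbps-h264",
        "videoStream": { "codec": "h264", "bitrateBps": 1600000 } } ],
    "muxStreams": [
      { "key": "360p1-125kbps-h264", "container": "mp4",
        "elementaryStreams": [ "360p1-125kbps-h264" ] },
      { "key": "720p-1600kbps-h264", "container": "mp4",
        "elementaryStreams": [ "720p-1600kbps-h264" ] } ] } }
"""

def undefined_stream_refs(config):
    """Return muxStream references that point at no elementaryStream key."""
    defined = {es["key"] for es in config["elementaryStreams"]}
    return [ref
            for mux in config["muxStreams"]
            for ref in mux["elementaryStreams"]
            if ref not in defined]

config = json.loads(CONFIG_JSON)["config"]
print(undefined_stream_refs(config))  # an empty list means every reference resolves
```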

TJ Liu

1 Answer


This behavior is working as intended. With many separate jobs there is a lot of duplicated work: steps such as downloading the input file and other parts of our pipeline have to happen once per job rather than once in total, and internally scheduling transcodes across many jobs will most likely never be as fast as scheduling one.

The Transcoder API handles transcoding in parallel regardless of whether your config has 1 elementary stream or 10. We call this partition processing: the long input file is divided into partitions (2 minutes of video each by default), and each partition is processed in parallel.
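The partitioning described above can be sketched as follows. The 120-second default comes from the answer; the helper itself is an illustrative sketch, not Transcoder API code:

```python
def partitions(duration_s, partition_s=120):
    """Split an input of duration_s seconds into consecutive partitions
    of at most partition_s seconds each (2 minutes by default)."""
    return [(start, min(start + partition_s, duration_s))
            for start in range(0, duration_s, partition_s)]

# A 5-minute input becomes three partitions, each transcoded in parallel:
print(partitions(300))  # [(0, 120), (120, 240), (240, 300)]
```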

Back to your original question: 1 job with 10 elementary streams vs. 10 jobs each with 1 elementary stream. Both are handled with partition processing (parallel transcoding); however, the 10 single-stream jobs each repeat a lot of duplicated setup work, so when you simply add their times together they take longer overall.
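A simplified cost model shows why the summed times diverge: each job pays a fixed per-job overhead (input download, pipeline setup, scheduling) on top of the actual transcode work. The numbers below are made-up placeholders for illustration, not measured Transcoder API data:

```python
# Hypothetical per-rendition transcode times (seconds) and per-job overhead.
# These values are illustrative assumptions, not measurements.
RENDITION_TIMES_S = [60, 65, 80, 95, 120, 140, 200]  # 7 renditions
PER_JOB_OVERHEAD_S = 240  # download input, spin up pipeline, schedule work

def combined_job_total(times, overhead):
    # One job: the fixed overhead is paid once for all renditions.
    return overhead + sum(times)

def individual_jobs_total(times, overhead):
    # N jobs: each rendition pays the overhead again; summing the
    # individual job times counts that overhead N times.
    return sum(overhead + t for t in times)

combined = combined_job_total(RENDITION_TIMES_S, PER_JOB_OVERHEAD_S)
individual = individual_jobs_total(RENDITION_TIMES_S, PER_JOB_OVERHEAD_S)
print(individual / combined)  # roughly the 2-3x slowdown reported above
```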
