
I am building an application where users can upload videos and others can watch them later. I am aiming for HLS streaming of the video on the client side, which means the output needs to be an .m3u8 playlist plus its .ts segments. I am using the Node fluent-ffmpeg module to do the processing. However, I have a big doubt: how do I ensure that all the .ts files (chunks) are stored back in the S3 bucket along with the .m3u8 file after FFmpeg has processed the .mp4 file?

The FFmpeg command only takes the location of the .m3u8 output file. How do I handle this when I want both the input and output locations to be S3?

Any help will be greatly appreciated.

I am following the answer from this question: Ffmpeg creating m3u8 from mp4, video file size. It works absolutely fine on my local machine; how do I achieve the same with S3?
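For reference, the local conversion I have working looks roughly like the sketch below (the paths and HLS options are simplified placeholders, not my exact command):

    // roughly my local setup: read an .mp4 from /tmp and write the HLS
    // playlist + .ts segments to /tmp/hls (paths are placeholders)
    const fs = require('fs');
    const ffmpeg = require('fluent-ffmpeg');

    fs.mkdirSync('/tmp/hls', { recursive: true });

    ffmpeg('/tmp/input.mp4')
      .outputOptions([
        '-codec copy',                                  // assumes the .mp4 is already H.264/AAC
        '-hls_time 4',                                  // ~4-second segments
        '-hls_list_size 0',                             // keep every segment in the playlist
        '-hls_segment_filename /tmp/hls/segment_%d.ts'
      ])
      .output('/tmp/hls/playlist.m3u8')
      .on('end', () => console.log('HLS output written to /tmp/hls'))
      .on('error', (err) => console.error('ffmpeg failed:', err))
      .run();

This writes everything only to the local disk, which is exactly where I am stuck.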

dexter2019

1 Answer


Do the conversion (MP4 in, M3U8 out), recognizing that the local filesystem snapshot you have when the job completes is NOT what you want.

You don't want it because the .m3u8, as output, contains container-level references (the EXTINF segment entries) that use local, relative filesystem paths to each child .ts file,

like this:

 #EXTINF:4.0
 ./segment_0.ts     <<  relative path from the .m3u8 output

You will need a post-process that relocates and re-references everything to a remote location:

  1. Sync all the local .ts files in /tmp to your CDN/S3 bucket.

  2. Save a mapping from each .ts file's old local path in /tmp to its new S3 URI.

  3. Update the .m3u8 file output by fluent-ffmpeg so that each segment entry references the CDN/S3 URI of its .ts copy, like this:

#EXTINF:4.0
https://${s3Domain}/${s3Bucket}/180_250000/hls/segment_0.ts
#EXTINF:4.0
https://${s3Domain}/${s3Bucket}/180_250000/hls/segment_1.ts
#EXTINF:4.0
https://${s3Domain}/${s3Bucket}/180_250000/hls/segment_2.ts
  4. Sync the updated .m3u8 to the CDN.

When done, all the references from the local filesystem snapshot produced when the fluent-ffmpeg process completes have been changed so that they work from the new cloud location.
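A minimal Node sketch of that post-process, using the aws-sdk v2 S3 client, might look like this (the bucket name, key prefix, local directory and playlist filename are assumptions you would replace with your own):

    // upload the segments, rewrite the playlist to point at S3, then upload the playlist
    const fs = require('fs');
    const path = require('path');
    const AWS = require('aws-sdk');

    const s3 = new AWS.S3();
    const bucket = 'my-bucket';            // assumption: your bucket
    const prefix = '180_250000/hls';       // assumption: your key prefix
    const localDir = '/tmp/hls';           // where fluent-ffmpeg wrote its output
    const cdnBase = `https://${bucket}.s3.amazonaws.com/${prefix}`;  // or your CloudFront domain

    async function publishHls() {
      // 1 + 2: upload every local .ts segment and remember its remote URI
      const uriBySegment = {};
      const segments = fs.readdirSync(localDir).filter((f) => f.endsWith('.ts'));
      for (const name of segments) {
        await s3.upload({
          Bucket: bucket,
          Key: `${prefix}/${name}`,
          Body: fs.createReadStream(path.join(localDir, name)),
          ContentType: 'video/MP2T',
        }).promise();
        uriBySegment[name] = `${cdnBase}/${name}`;
      }

      // 3: rewrite each segment line of the playlist to its remote URI,
      //    leaving the #EXT... tag lines untouched
      const rewritten = fs.readFileSync(path.join(localDir, 'playlist.m3u8'), 'utf8')
        .split('\n')
        .map((line) => uriBySegment[path.basename(line.trim())] || line)
        .join('\n');

      // 4: upload the rewritten playlist itself
      await s3.upload({
        Bucket: bucket,
        Key: `${prefix}/playlist.m3u8`,
        Body: rewritten,
        ContentType: 'application/vnd.apple.mpegurl',
      }).promise();

      return `${cdnBase}/playlist.m3u8`;
    }

    publishHls()
      .then((url) => console.log('playlist available at', url))
      .catch(console.error);

The same idea works if you serve through CloudFront: build cdnBase from your distribution's domain instead of the raw S3 URL.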

That is the brute-force workaround.

OR

You use a service like CloudFront that does the dirty work for you.

Robert Rowntree
  • Hello Robert, thanks a lot for the detailed explanation. I am already using CloudFront to sign S3 URLs for CDN delivery. Are you suggesting the combination of CloudFront and AWS Elemental MediaConvert? – dexter2019 May 20 '19 at 05:25
  • Sorry if that was a misdirection. I wrote a brute-force implementation in Node (Heroku CPU instances using S3 buckets). At the time I thought there was a much easier way to generate output streams on the CPU instance and pipe them so that the destination would be a bucket containing all the referenced ".ts" segments AND where the correlated EXTINF paths in the container M3U8 were magically recast to point at the bucket URIs. Guess not; there should be "an app for that". – Robert Rowntree May 20 '19 at 08:11
  • Something like the answer here: https://stackoverflow.com/questions/46672066/upload-ffmpeg-output-directly-to-amazon-s3 -- but I guess it does not work that easily and all the .ts paths have to be recast. Good luck. – Robert Rowntree May 20 '19 at 08:18