
So I followed this tutorial to set everything up, and changed the function a bit to compress video, but no matter what I try, on larger videos (basically anything over 50-100MB) the output file is always cut short, and by different amounts depending on the encoding settings I'm using. I tried the solution found here, adding a -nostdin flag to my ffmpeg command, but that didn't fix the issue either.
Another odd thing: no matter what I try, if I remove the '-f mpegts' flag, the output video will be 0B.
My Lambda function is set up with 3008MB of memory (I submitted a ticket to get my limit raised so I can use the full 10240MB available) and 2048MB of ephemeral storage (I'm honestly not sure I need anything more than the minimum 512MB, but I upped it to try to fix the issue). When I check my CloudWatch logs, on really large files it will occasionally time out, but otherwise I get no error messages, just the standard start, end, and billable duration messages.

This is the code for my Lambda function.

import json
import os
import subprocess
import shlex
import boto3

S3_DESTINATION_BUCKET = "rw-video-out"
SIGNED_URL_TIMEOUT = 600

def lambda_handler(event, context):

    # Triggered by an S3 upload event; pull the source bucket and key from the event record
    s3_source_bucket = event['Records'][0]['s3']['bucket']['name']
    s3_source_key = event['Records'][0]['s3']['object']['key']

    s3_source_basename = os.path.splitext(os.path.basename(s3_source_key))[0]
    s3_destination_filename = s3_source_basename + "-comp.mp4"

    s3_client = boto3.client('s3')
    s3_source_signed_url = s3_client.generate_presigned_url('get_object',
        Params={'Bucket': s3_source_bucket, 'Key': s3_source_key},
        ExpiresIn=SIGNED_URL_TIMEOUT)

    ffmpeg_cmd = f"/opt/bin/ffmpeg -nostdin -i {s3_source_signed_url} -f mpegts libx264 -preset fast -crf 28 -c:a copy - "
    command1 = shlex.split(ffmpeg_cmd)
    p1 = subprocess.run(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    resp = s3_client.put_object(Body=p1.stdout, Bucket=S3_DESTINATION_BUCKET, Key=s3_destination_filename)
    s3 = boto3.resource('s3')
    s3.Object(s3_source_bucket,s3_source_key).delete()

    return {
        'statusCode': 200,
        'body': json.dumps('Processing complete successfully')
    }
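
As a side note for anyone comparing: here is a minimal sketch (reusing the imports above, and assuming the same /opt/bin/ffmpeg layer and pipe-to-S3 approach) of how the exit code and stderr could be checked so that a failed or truncated stream at least surfaces as an error in CloudWatch instead of passing silently. The helper name and the stderr truncation length are illustrative, not part of my actual function:

def transcode_to_s3(s3_client, signed_url, bucket, key):
    ffmpeg_cmd = (f"/opt/bin/ffmpeg -nostdin -i {signed_url} "
                  "-c:v libx264 -preset fast -crf 28 -c:a copy -f mpegts -")
    p = subprocess.run(shlex.split(ffmpeg_cmd),
                       stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    # ffmpeg reports problems via its exit code and stderr; raising here makes
    # failures show up in the logs rather than uploading a partial stream
    if p.returncode != 0:
        raise RuntimeError(p.stderr.decode(errors="replace")[-2000:])
    s3_client.put_object(Body=p.stdout, Bucket=bucket, Key=key)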

This is the code from the solution I mentioned, but when I try using it, I get "output.mp4 not found" errors.

def lambda_handler(event, context):
    print(event)
    os.chdir('/tmp')  # /tmp is the only writable path in Lambda
    s3_source_bucket = event['Records'][0]['s3']['bucket']['name']
    s3_source_key = event['Records'][0]['s3']['object']['key']

    s3_source_basename = os.path.splitext(os.path.basename(s3_source_key))[0]
    s3_destination_filename = s3_source_basename + ".mp4"

    s3_client = boto3.client('s3')
    s3_source_signed_url = s3_client.generate_presigned_url('get_object',
        Params={'Bucket': s3_source_bucket, 'Key': s3_source_key},
        ExpiresIn=SIGNED_URL_TIMEOUT)
    print(s3_source_signed_url)
    # Download the source into /tmp (the cwd after os.chdir above) and run
    # ffmpeg against the local file
    s3_client.download_file(s3_source_bucket, s3_source_key, s3_source_key)
    # ffmpeg_cmd = "/opt/bin/ffmpeg -framerate 25 -i \"" + s3_source_signed_url + "\" output.mp4 "
    ffmpeg_cmd = f"/opt/bin/ffmpeg -framerate 25 -i {s3_source_key} output.mp4"
    os.system(ffmpeg_cmd)
    file = 'output.mp4'
    resp = s3_client.put_object(Body=open(file, "rb"), Bucket=S3_DESTINATION_BUCKET, Key=s3_destination_filename)
    s3 = boto3.resource('s3')
    s3.Object(s3_source_bucket,s3_source_key).delete()
    return {
        'statusCode': 200,
        'body': json.dumps('Processing complete successfully')
    }
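
For reference, here is a minimal sketch of the same /tmp-based flow using absolute paths and subprocess with check=True, so a failed ffmpeg run raises immediately instead of surfacing later as a missing output.mp4. It reuses the imports above; the helper name is illustrative, and it assumes the same /opt/bin/ffmpeg layer:

def transcode_via_tmp(s3_client, src_bucket, src_key, dest_bucket, dest_key):
    local_in = os.path.join('/tmp', os.path.basename(src_key))
    local_out = '/tmp/output.mp4'
    s3_client.download_file(src_bucket, src_key, local_in)
    # check=True raises CalledProcessError if ffmpeg exits non-zero, so the
    # "output.mp4 not found" symptom shows up as the real ffmpeg failure
    subprocess.run(['/opt/bin/ffmpeg', '-nostdin', '-i', local_in, local_out],
                   check=True)
    s3_client.upload_file(local_out, dest_bucket, dest_key)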

Any help would be greatly appreciated.


1 Answer


Okay, so it looks like the solution I posted earlier actually does work. My issue was that I had changed the ffmpeg command to use s3_source_signed_url instead of s3_source_key; changing it back to s3_source_key and adding the -nostdin flag appears to have fixed my issues.
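
For clarity, the working command line in the second snippet then looks like this (with the file already downloaded into /tmp; this is just the single changed line, not a new approach):

    ffmpeg_cmd = f"/opt/bin/ffmpeg -nostdin -framerate 25 -i {s3_source_key} output.mp4"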