
As far as I know, AWS Lambda considers the function failed if it takes over 3 seconds.

def copy_to_redshift(cur, key):
    # Build the COPY statement; the S3 object key is assumed to be appended
    # to the bucket path so that the '%' interpolation below has a target
    sql = '''
    copy <table_name>
    from '<s3 bucket url>/%s'
    credentials 'aws_access_key_id=<..>;aws_secret_access_key=<..>'
    json 'auto'
    ''' % (key,)

    cur.execute(sql)
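
For context, here is a minimal handler sketch of how this function might be wired up, assuming the Lambda is triggered by an S3 event and connects with psycopg2; the connection details and handler wiring are placeholders, not the asker's actual setup:

    import psycopg2

    def lambda_handler(event, context):
        # Placeholder connection details; in practice these would come from
        # environment variables or a secrets store
        conn = psycopg2.connect(
            host='<redshift endpoint>',
            port=5439,
            dbname='<database>',
            user='<user>',
            password='<password>',
        )
        conn.autocommit = True  # the COPY is committed as soon as it finishes
        cur = conn.cursor()

        # Object key of the uploaded file, taken from the S3 event record
        key = event['Records'][0]['s3']['object']['key']

        # cur.execute() blocks until the COPY finishes on the Redshift side,
        # which is why the handler can run past the configured Lambda timeout
        copy_to_redshift(cur, key)

        conn.close()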

I use this code to load data from S3 to Redshift. It takes over 3 seconds, and I got this log from Lambda:

Task timed out after 3.00 seconds

How can I improve this performance?

My S3 file is 7 MB and has 50,000 rows.

Luckily, my Redshift load completes successfully even though the Lambda finishes as failed, so I think the function doesn't need to wait for the COPY SQL to complete.

Is it possible to just issue the SQL and terminate the function?

user3773632
  • Possible duplicate of [AWS Lambda Task timed out after 6.00 seconds](https://stackoverflow.com/questions/47594168/aws-lambda-task-timed-out-after-6-00-seconds) – Dunedan Jun 29 '18 at 05:47

1 Answer


AWS Lambda has a maximum timeout of 5 minutes. If your processing does not complete within 3 seconds, you can gradually increase the timeout and see what the optimum value for your Lambda is (obviously up to the 5-minute maximum).

Regarding "How can I reduce this performance", you can try increasing the memory footprint for your Lambda. This single memory control knob does 2 things

  1. Increases the memory (RAM) available to your Lambda
  2. Increases the CPU power available to your Lambda in proportion to the memory

The combination of the above two should hopefully resolve the problem.
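
As a concrete illustration, both the timeout and the memory setting can be raised with a single update_function_configuration call; here is a minimal sketch using boto3, where the function name and the chosen values are placeholder assumptions, not values from the question:

    import boto3

    lambda_client = boto3.client('lambda')

    # Hypothetical function name and values; 300 seconds (5 minutes) was the
    # maximum timeout at the time, and MemorySize also scales CPU proportionally
    lambda_client.update_function_configuration(
        FunctionName='copy-to-redshift',  # placeholder
        Timeout=300,      # seconds
        MemorySize=1024,  # MB
    )

The same settings can also be changed in the Lambda console under the function's basic settings.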

And yes, if the Lambda issues the Redshift COPY command before it times out, then that Redshift operation will succeed irrespective of the Lambda timing out.

Arafat Nalkhande