41

I am using the Serverless framework. My Lambda function connects to a DynamoDB table to update an item. The table's read and write capacity units are 5 and auto scaling is disabled. The Lambda function has 128 MB of memory allocated.

I used JMeter for performance testing. I sent 1000 requests concurrently; some responses return the expected output while others return an internal server error (502 Bad Gateway). I have also checked the CloudWatch logs and only see a task timeout error. Can anyone suggest why I am getting this error and how to solve it?
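The handler itself isn't shown in the question; purely for context, a function like the one described might look roughly like the sketch below. The Node.js runtime is confirmed in the comments, but the AWS SDK v2 DocumentClient usage and all table/key/attribute names are assumptions, not the asker's code.

// Hypothetical sketch of the kind of handler described, not the asker's actual code.
// Assumes the Node.js runtime and the AWS SDK for JavaScript v2 bundled with Lambda.
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

exports.handler = (event, context, callback) => {
  const params = {
    TableName: 'my-table',                        // placeholder table name
    Key: { id: event.id },                        // placeholder key
    UpdateExpression: 'SET #s = :s',
    ExpressionAttributeNames: { '#s': 'status' },
    ExpressionAttributeValues: { ':s': 'processed' },
  };

  docClient.update(params, (err, data) => {
    if (err) return callback(err);                // surfaces as a 502 behind API Gateway
    callback(null, { statusCode: 200, body: JSON.stringify(data) });
  });
};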

darshi kothari
  • How many connections are there in your pool? Maybe the pool does not have enough connections and some tasks are timing out? – Aniket Thakur Dec 01 '17 at 14:56
  • @Mike Dinescu has a pretty thorough answer, but just in case, what runtime are you using? If in node, you may need to set the `context.callbackWaitsForEmptyEventLoop = false` http://docs.aws.amazon.com/lambda/latest/dg/nodejs-prog-model-context.html – Justin Kruse Dec 14 '17 at 17:07
  • Yes @JustinKruse, I am using Node.js – darshi kothari Dec 15 '17 at 04:50
  • Were you able to resolve this? We had to set `context.callbackWaitsForEmptyEventLoop` so we didn't wait for things like network events or DB connections to be cleaned up before returning; otherwise we were always hitting this 6-second timeout (see the sketch after these comments) – Justin Kruse Dec 15 '17 at 23:15
  • Yes @JustinKruse, I have resolved this by increasing the read and write capacity units of DynamoDB. – darshi kothari Dec 18 '17 at 05:13
  • See this answer (https://stackoverflow.com/a/43578149/80434): "Lambda functions are limited to a maximum execution time of 5 minutes. The actual limit is configured when the Lambda function is created. The limit is in place because Lambda functions are meant to be small and quick rather than being large applications. Your error message says Task timed out after 15.00 seconds. This means that AWS intentionally stopped the task once it hit a run-time of 15 seconds. It has nothing to do with what the function was doing at the time, nor the file that was being processed." – Korayem Jan 09 '19 at 18:09
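For readers whose timeouts come from open connections rather than throttling, the `context.callbackWaitsForEmptyEventLoop` flag mentioned in the comments above is set inside the handler. A minimal sketch (the handler shape and `doWork` are placeholders, not the asker's code):

// Sketch only: with this flag set to false, the Node.js runtime returns as soon as
// callback() is invoked instead of waiting for the event loop to drain
// (e.g. keep-alive sockets or pooled DB connections).
exports.handler = (event, context, callback) => {
  context.callbackWaitsForEmptyEventLoop = false;

  doWork(event, (err, result) => {               // doWork is a placeholder for the real logic
    if (err) return callback(err);
    callback(null, result);
  });
};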

2 Answers

75

The default timeout for AWS Lambda functions when using the Serverless framework is 6 seconds. Simply change that to a higher value as noted in the documentation:

functions:
  hello:
    ...
    timeout: 10 # optional, in seconds, default is 6
Dunedan
  • This only addresses (or sidesteps) part of the problem - answer by Mike Dinescu is more thorough: https://stackoverflow.com/a/47605576/1357094 – cellepo Dec 28 '19 at 20:38
  • @Dunedan, thanks! I was struggling with this. – Gru Aug 17 '21 at 05:04
12

Since you mentioned that your DynamoDB table is provisioned with only 5 WCU, only 5 writes per second are allowed.

DynamoDB does offer burst capacity, allowing you to use 300 seconds' worth of accumulated capacity (which at 5 WCU is equivalent to 1,500 total write requests), but as soon as that is exhausted it will start to throttle.

The DynamoDB client has automatic retries with exponential backoff built in, and it is smart enough to recognize throttling, so it slows down the retries to the point where a single write can easily take several seconds to complete successfully if it is repeatedly throttled.

Your Lambda function is very likely timing out at 6 seconds because it is waiting on those retries to DynamoDB.

So, when doing load testing, make sure that your dependencies are all scaled appropriately. At 1000 requests per second you should scale the read/write capacity allocation for your DynamoDB table(s) and index(es) accordingly.
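To make the retry effect above visible during load tests, the AWS SDK for JavaScript v2 client can be configured to fail fast instead of quietly retrying past the 6-second Lambda timeout. The option names below are real SDK v2 settings, but the values are illustrative assumptions, not part of the original answer:

// Sketch only: cap the SDK's automatic retries so a throttled write surfaces quickly
// as a ProvisionedThroughputExceededException rather than being retried until the
// Lambda timeout is hit. Values are illustrative.
const AWS = require('aws-sdk');

const dynamodb = new AWS.DynamoDB({
  maxRetries: 2,                      // DynamoDB's default allows many more retries
  retryDelayOptions: { base: 50 },    // base delay (ms) for exponential backoff
  httpOptions: { timeout: 1000 },     // per-request socket timeout (ms)
});

const docClient = new AWS.DynamoDB.DocumentClient({ service: dynamodb });

Failing fast only changes how quickly the throttling shows up in your logs; the underlying fix is still the one above (and the one the asker confirmed in the comments): provision enough read/write capacity for the load you are testing.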

Mike Dinescu