
If my Lambda function written in Python takes 1.8 seconds to initialize (during a cold start) and 400 ms to execute, am I charged for the 400 ms execution time or the entire 2.2 seconds of initialization + execution time?

From X-Ray, I see:

[AWS X-Ray trace screenshot]

From CloudWatch logs, I see:

Duration: 404.42 ms Billed Duration: 500 ms Memory Size: 448 MB Max Memory Used: 113 MB

What I understand from this is that I was billed for 500ms of execution time, so does that mean code initialization (e.g. importing stuff) is free?

Vinayak

5 Answers


So I decided to try and figure it out myself with a little experiment. I created a Lambda function using Python 2.7 with 128 MB of RAM, timeout of 15 seconds and active tracing enabled. I modified the sample code to add a 10 second sleep right after the import statement:

print "starting import"
import json
from time import sleep
sleep(10)
print "calling handler"

def lambda_handler(event, context):
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }

Since the Lambda started cold, I saw this in the X-Ray output: [X-Ray screenshot: cold start]

And I saw this in CloudWatch logs:

22:06:47 starting import
22:06:57 calling handler
22:06:58 START RequestId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx Version: $LATEST
22:06:58 starting import
22:07:08 calling handler
22:07:08 END RequestId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
22:07:08 REPORT RequestId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx Duration: 10022.57 ms   Billed Duration: 10100 ms Memory Size: 128 MB   Max Memory Used: 19 MB

The initialization code actually ran TWICE. After sleeping for 10 seconds the first time, it was restarted when the handler was invoked, so the function effectively took about 20 seconds to finish, but I was billed for only 10 seconds.

I ran it again, this time a warm-start and I got this:

X-Ray output (warm start): [X-Ray screenshot: warm start]

CloudWatch logs (warm start):

22:23:16 START RequestId: yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy Version: $LATEST
22:23:16 END RequestId: yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy
22:23:16 REPORT RequestId: yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy Duration: 6.97 ms   Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 29 MB

Nothing suspicious there. To force another cold start, I increased the function memory to 192 MB, saved it, reverted it back to 128 MB, saved it again, and invoked the function once more. The X-Ray output was the same as before, but the CloudWatch logs had something interesting:

22:30:13 starting import
22:30:24 START RequestId: zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz Version: $LATEST
22:30:24 starting import
22:30:34 calling handler
22:30:34 END RequestId: zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz
22:30:34 REPORT RequestId: zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz Duration: 10010.85 ms   Billed Duration: 10100 ms Memory Size: 128 MB   Max Memory Used: 19 MB

It seems that while my code was in the middle of its 10-second sleep, Lambda cut it off and restarted it. The execution time was again 20 seconds, but I was billed for 10 seconds. So I wondered: what if, instead of one 10-second sleep, I added 15 one-second sleeps?

Updated code:

print "starting import"
import json
from time import sleep
for i in range(1, 16):
    sleep(1)
    print "completed {}th sleep".format(i)

print "calling handler"
def lambda_handler(event, context):
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }

The function timed out!

X-Ray output: [X-Ray screenshot: timed-out invocation]

CloudWatch logs:

22:51:54 starting import
22:51:55 completed 1th sleep
22:51:56 completed 2th sleep
22:51:57 completed 3th sleep
22:51:58 completed 4th sleep
22:51:59 completed 5th sleep
22:52:00 completed 6th sleep
22:52:01 completed 7th sleep
22:52:02 completed 8th sleep
22:52:03 completed 9th sleep
22:52:04 START RequestId: 11111111-1111-1111-1111-111111111111 Version: $LATEST
22:52:04 starting import
22:52:05 completed 1th sleep
22:52:06 completed 2th sleep
22:52:07 completed 3th sleep
22:52:08 completed 4th sleep
22:52:09 completed 5th sleep
22:52:10 completed 6th sleep
22:52:11 completed 7th sleep
22:52:12 completed 8th sleep
22:52:13 completed 9th sleep
22:52:14 completed 10th sleep
22:52:15 completed 11th sleep
22:52:16 completed 12th sleep
22:52:17 completed 13th sleep
22:52:18 completed 14th sleep
22:52:19 END RequestId: 11111111-1111-1111-1111-111111111111
22:52:19 REPORT RequestId: 11111111-1111-1111-1111-111111111111 Duration: 15015.16 ms   Billed Duration: 15000 ms Memory Size: 192 MB   Max Memory Used: 19 MB
22:52:19
2019-03-29T22:52:19.621Z 11111111-1111-1111-1111-111111111111 Task timed out after 15.02 seconds
22:52:19 starting import
22:52:20 completed 1th sleep
22:52:21 completed 2th sleep
22:52:22 completed 3th sleep
22:52:23 completed 4th sleep
22:52:24 completed 5th sleep
22:52:25 completed 6th sleep
22:52:26 completed 7th sleep
22:52:27 completed 8th sleep
22:52:28 completed 9th sleep
22:52:29 completed 10th sleep

It actually ran for 25.8 seconds but then timed out, and I was billed for 15 seconds. The code that executes before the handler is called ran for about 9 seconds, then Lambda cut it off and restarted the function; the restarted run didn't finish either and ultimately timed out after 25.8 seconds. When I increased the Lambda timeout to 16 seconds, it finished executing in 25.8 seconds (as shown in X-Ray) and billed me for 15100 ms.

So this leads me to believe that if the handler function isn't called within about 9-10 seconds after initialization, Lambda will restart the function. So what if the code initialization takes less than 10 seconds?

Updated code:

print "starting import"
import json
from time import sleep
for i in range(1, 10):
    sleep(1)
    print "completed {}th sleep".format(i)

print "calling handler"
def lambda_handler(event, context):
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }

I ran this function cold about 10 times and the billed duration was always 100 ms. I even lowered the Lambda timeout to 1 second and it still finished executing successfully!

X-Ray output: [X-Ray screenshot: sub-10-second initialization]

CloudWatch logs:

23:23:43 starting import
23:23:44 completed 1th sleep
23:23:45 completed 2th sleep
23:23:46 completed 3th sleep
23:23:47 completed 4th sleep
23:23:48 completed 5th sleep
23:23:49 completed 6th sleep
23:23:50 completed 7th sleep
23:23:51 completed 8th sleep
23:23:52 completed 9th sleep
23:23:52 calling handler
23:23:52 START RequestId: 22222222-2222-2222-2222-222222222222 Version: $LATEST
23:23:52 END RequestId: 22222222-2222-2222-2222-222222222222
23:23:52 REPORT RequestId: 22222222-2222-2222-2222-222222222222 Duration: 0.73 ms   Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 44 MB

As Steve HOUEL rightly pointed out, this leads me to believe that Lambda won't charge you for the time it takes to initialize your code (e.g. imports), as long as initialization finishes within about 9 seconds. If it takes longer than that, Lambda restarts your function; assuming you set a large enough timeout, execution then effectively takes 10 seconds plus the regular cold-start execution time, but you are still billed only for the cold-start execution time, without the extra 10 seconds.
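One way to check this on your own function is to timestamp the module load and report it from the handler. This is just a sketch of the measurement, not an AWS-documented API; the handler and field names are my own:

```python
import time

# Module scope runs once per container, during the cold-start init phase.
_init_started = time.time()
# ... heavy imports / client setup would normally go here ...
_init_ms = (time.time() - _init_started) * 1000.0


def lambda_handler(event, context):
    # The REPORT line's "Duration" covers only this function's runtime;
    # _init_ms shows how long module initialization took by comparison.
    handler_started = time.time()
    return {
        'statusCode': 200,
        'init_ms': round(_init_ms, 2),
        'handler_ms': round((time.time() - handler_started) * 1000.0, 2),
    }
```

Invoking it locally with `lambda_handler({}, None)` returns both numbers; on Lambda, you can compare `init_ms` against what the REPORT line bills you for.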

Vinayak

You only pay for init if you spend more than about 10 seconds on it. In that case, the init process restarts and you start paying for it.

What you should also know is that once your function has warmed up, it is not initialized again (until roughly 45 minutes of inactivity). You then pay only for the execution time.
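A common way to observe the warm/cold behaviour described here is a module-level flag: it is set during init and survives across invocations in the same container. A minimal sketch (the flag name is my own):

```python
# Module scope executes once per container; this flag therefore
# distinguishes the first (cold) invocation from later warm ones.
_is_cold = True


def lambda_handler(event, context):
    global _is_cold
    was_cold = _is_cold
    _is_cold = False  # every later invocation in this container is warm
    return {'cold_start': was_cold}
```

Calling it twice in the same process returns `{'cold_start': True}` and then `{'cold_start': False}`, mirroring a cold invocation followed by a warm one.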

Steve HOUEL
    Where did you get that 45 mins of idle time from? Many blogs all over the internet have run benchmarks and it has been "proven" that idle containers die, approximately, after 5 minutes. It could eventually happen that a container could stay live for that long, but that's very unlikely. This is a nice one by acloud.guru: https://read.acloud.guru/how-long-does-aws-lambda-keep-your-idle-functions-around-before-a-cold-start-bf715d3b810 – Thales Minussi Mar 29 '19 at 09:43
  • Yes as you said and that blog article said in conclusion: "AWS Lambda will generally terminate functions after 45–60 mins of inactivity, although idle functions can sometimes be terminated a lot earlier to free up resources needed by other customers." – Steve HOUEL Mar 29 '19 at 09:50
  • This is a more up-to-date article, also by Yan Cui, where he states it's around 5 minutes again. The link I sent you is outdated, so I am sorry for that. https://hackernoon.com/im-afraid-you-re-thinking-about-aws-lambda-cold-starts-all-wrong-7d907f278a4f. I will need to run the benchmarks again to check, but the 5-minute figure is much more reasonable to me. Have you tested it yourself? – Thales Minussi Mar 29 '19 at 09:57
  • Hmm, in that blog article he never mentions that timeout; he only configures the ping event every 5-10 min to keep a Lambda warm (not because they get cold after that). No real benchmark on my side, only experience from multiple projects. But as you said, the service evolves quickly, so maybe it changed. – Steve HOUEL Mar 29 '19 at 10:04
  • "For these APIs, you can have a cron job that runs every 5–10 mins and pings the API (with a special ping request), so that by the time the API is used by a real user it'll hopefully be warm and the user would not be the one to have to endure the cold start time." If they terminated after 45 mins only, running it every 44 mins would do the trick – Thales Minussi Mar 29 '19 at 10:09
  • It's not a constant. 45 minutes is the average for that, but your function can get cold sooner. I think only a recent bench could help us on that. I just found one (from Feb 2019) : https://kevinslin.com/aws/lambda_cold_start_idle/#results-us-east-1 – Steve HOUEL Mar 29 '19 at 10:12
  • Sorry about that, but I am basing this on this blog post (a French one, but you can translate it): https://blog.xebia.fr/2017/11/20/serverless-aws-lambda-vous-saurez-tout-sur-le-cold-start/. Otherwise, could you tell me what you disagree with? – Steve HOUEL Mar 29 '19 at 13:54
  • I think there are many mistakes in this blog post. I have posted another answer based on a different blog post. Bottom line: you are always charged for initialization time. – Ronyis Mar 29 '19 at 14:12
  • This seems to be the correct answer. From my [little experiment](https://stackoverflow.com/a/55426800/1768141) it seems if you spend more than 9 seconds initializing code, Lambda will bill you for it. – Vinayak Mar 30 '19 at 00:08
  • Thx for your experiment, I think it’s very clear for everyone now. – Steve HOUEL Mar 31 '19 at 06:32

It depends on what you mean by initialisation time.

If you mean container startup, allocation, etc. you DO NOT pay for it.

If you mean code initialisation (requiring modules, connecting to DBs, etc), yes, you do pay for it.

I don't know about Python, but if you want to see it in action in NodeJS, import a module that has a blocking operation before exporting its functions.

For example, you can have this someModule.js file that contains the following code:

for (let i = 0; i < 10000000000; i++) {}
module.exports = {
    test: () => {}
}

The for loop is a blocking operation, therefore, module.exports will only be invoked once the loop is finished.

This means if you require('./someModule') in your handler, it will hang until someModule is done exporting everything.

const someModule = require('./someModule')

exports.handler = async event => {
    console.log(event)
}

You would then pay for the time taken for someModule to export its functions successfully.
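For anyone who wants to see the same effect in the question's language, here is a rough Python analogue. The module is hypothetical and built inline so the snippet is self-contained; in a real Lambda the blocking work would simply sit at the top of an imported file:

```python
import time
import types

# Hypothetical module whose top-level code blocks, mirroring the Node
# for-loop that runs before module.exports is assigned.
SOURCE = """
import time
time.sleep(0.2)            # stands in for the blocking for-loop
def test():
    return 'ready'
"""

module = types.ModuleType('some_module')
start = time.time()
exec(SOURCE, module.__dict__)        # executing the module body is what import does
import_seconds = time.time() - start

print(module.test())                 # only callable once the top-level code finished
print(import_seconds >= 0.2)         # True: the "import" blocked for the sleep duration
```

Just as with `require` in Node, the functions only become usable after all module-level code has run, which is why a slow import shows up as initialization time.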

Thales Minussi
  • Interesting. Could you try the same thing but instead of a for loop, have it sleep for exactly 9 seconds or less? In that case, are you still billed for the time taken to require someModule? – Vinayak Mar 30 '19 at 00:07

The things in Lambda you get for free are:

  • You get 1 Million requests per month.
  • 400,000 GB-seconds of compute time per month.

Duration is calculated from the time your code begins executing until it returns or otherwise terminates, rounded up to the nearest 100ms.

The price depends on the amount of memory you allocate to your function.

Lambda counts a request each time it starts executing in response to an event notification or an Invoke call, including test invokes from the console. Hence, you are charged for the total number of requests across all your functions.

Also, at program startup, some actions are performed, such as importing libraries, setting up DB connections, and initializing global variables and classes. You DO pay for this part of Lambda initialization.
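To make the pricing above concrete, here is a sketch of the duration charge for the numbers in the question (448 MB, 404.42 ms). The per-GB-second rate is the published one at the time of writing (~$0.0000166667); treat it as an assumption and check the current pricing page:

```python
import math

# Per-GB-second duration rate in effect at the time (an assumption --
# always check the current AWS Lambda pricing page).
PRICE_PER_GB_SECOND = 0.0000166667

def duration_cost(memory_mb, duration_ms):
    # Duration is rounded UP to the nearest 100 ms before billing.
    billed_ms = math.ceil(duration_ms / 100.0) * 100
    gb_seconds = (memory_mb / 1024.0) * (billed_ms / 1000.0)
    return billed_ms, gb_seconds * PRICE_PER_GB_SECOND

billed_ms, cost = duration_cost(448, 404.42)
print(billed_ms)         # 500, matching the REPORT line in the question
print('%.10f' % cost)    # roughly $0.0000036458 for this invocation
```

Note that the rate scales linearly with the memory you allocate, which is why the question's 448 MB function costs 3.5x what a 128 MB one would per billed millisecond.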

Aress Support
  • That's right, but it misses what was asked - you do pay for the initialization time of your code during cold start. – Ronyis Mar 29 '19 at 13:36

EDITED: Your experiment looks valid.

I recommend reading this excellent post which includes information about how billing works on AWS Lambda runtime.

Ronyis
  • I added an answer with the results of a little experiment I did that makes me believe that Lambda doesn't really charge for code initialization as long as it completes in about 9 seconds (I'm looking at CloudWatch logs to make this conclusion). I might be wrong but if you could look at my answer and see if that makes sense, that'd be great. – Vinayak Mar 29 '19 at 23:57
  • Looks correct, that's surprising... Edited the answer – Ronyis Mar 31 '19 at 08:36
  • Although it's from a while ago, this does not actually answer the question. The linked document might answer it, but not the post itself. – Ernesto Dec 09 '21 at 17:48