
I am running a Lambda container function configured with 10240 MB of memory. On execution, my container is killed with a SIGSEGV error, which I took to mean it had run out of memory. However, the logs show that the max memory used by the function was only 1473 MB, well below the 10240 MB configured for the function.

Do containers have access to all of the memory the lambda function has access to by default?

For instance, with the `docker run` command, one can specify memory and CPU limits:

docker run --memory 10240m container-name

However, Lambda does not expose these run flags when it launches the container.

Logs:

2023-05-17T15:28:15.643+01:00 error: ffmpeg was killed with signal SIGSEGV

REPORT RequestId: 41dfc700-a47b-41b5-9df5-4166a8829780   Duration: 47973.41 ms   Billed Duration: 49156 ms   Memory Size: 10240 MB   Max Memory Used: 1473 MB    Init Duration: 1181.64 ms
  • Yes the container should have access to all the memory allocated. According to this, you may need to adjust the parameters you pass to your ffmpeg command: https://stackoverflow.com/questions/75649832/ffmpeg-hls-conversion-error-on-aws-lambda-ffmpeg-was-killed-with-signal-sigseg Also that error doesn't always mean a process ran out of RAM. – Mark B May 17 '23 at 15:30
  • Thank you for the response @MarkB! When I run this locally, it works no problem. Running with: ``` docker run --memory 10240m container-name ``` – cjd May 17 '23 at 15:58
  • 1
    "SIGSEGV error, meaning it has run out of memory" -- that's not what SIGSEGV means. It means that the program accessed memory that it shouldn't. I suppose that it _might_ happen as the result of a failed `malloc()` (which returns `NULL`), but more likely there's a different problem. – kdgregory May 17 '23 at 16:52

0 Answers