The following program is killed by the kernel when memory runs out. I would like to know when the global variable `errno` should be set to `ENOMEM`.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MEGABYTE (1024*1024)
#define TRUE 1

int main(int argc, char *argv[])
{
    void *myblock = NULL;
    int count = 0;

    while (TRUE)
    {
            myblock = (void *) malloc(MEGABYTE);
            if (!myblock) break;
            memset(myblock, 1, MEGABYTE);
            printf("Currently allocating %d MB\n", ++count);
    }
    exit(0);
}
Cacho Santa
venus.w
    Just as additional hints. Don't cast the return of `malloc`. Casting it to `void*` is particularly weird since that *is* the return type. If you feel the need for it, you probably forgot to include "stdlib.h". Then modern C compilers (and on linux all are modern in that sense) have a Boolean type. Include "stdbool.h" and use `bool`, `false` and `true` appropriately. – Jens Gustedt Jun 10 '12 at 05:30

4 Answers

6

First, fix your kernel not to overcommit:

echo "2" > /proc/sys/vm/overcommit_memory

Now malloc should behave properly.

R.. GitHub STOP HELPING ICE
    +1, this answer is the correct one, although it doesn't explain why :) To give you a bit more information what is happening on modern linux systems, if you don't do what R.. suggests. An allocation then just reserves a range of virtual addresses for the process and doesn't allocate the pages themselves. These are only really claimed from the kernel when you access them for the first time. – Jens Gustedt Jun 10 '12 at 05:33
  • Even with my fix, the kernel doesn't allocate the pages themselves right away. It just accounts for how many will be needed and makes sure never to commit more than can (later) be satisfied. – R.. GitHub STOP HELPING ICE Jun 10 '12 at 13:41
  • This managed to just completely break my CentOS box and required a restart :/ – Matt Fletcher Dec 12 '14 at 12:16
    @MattFletcher: You probably had a lot of bloated desktop software running with more memory already allocated than could be committed. :/ – R.. GitHub STOP HELPING ICE Dec 12 '14 at 20:40
  • Nope, pretty clean rackspace server. Just happened to only have 512mb RAM! – Matt Fletcher Dec 12 '14 at 21:15
  • Also look at `/proc/sys/vm/overcommit_ratio` to understand how many memory can be overcommitted. – Stanislav Ivanov May 18 '21 at 11:56
6

It happens when you try to allocate too much memory at once.

#include <stdlib.h>
#include <stdio.h>
#include <errno.h>

int main(int argc, char *argv[])
{
  void *p;

  p = malloc(1024L * 1024 * 1024 * 1024);
  if(p == NULL)
  {
    printf("%d\n", errno);
    perror("malloc");
  }
}

In your case the OOM killer is getting to the process first.

Alain O'Dea
Ignacio Vazquez-Abrams
5

As "R" hinted, the problem is the default behaviour of Linux memory management, which is "overcommitting". This means that the kernel claims to allocate memory successfully, but doesn't actually allocate it until later, when you try to access it. If the kernel finds out that it has allocated too much memory, it kills a process with "the OOM (Out Of Memory) killer" to free up some memory. The way it picks the process to kill is complicated, but if you have just allocated most of the memory in the system, it's probably going to be your process that gets the bullet.

If you think this sounds crazy, some people would agree with you.

To get it to behave as you expect, as R said:

echo "2" > /proc/sys/vm/overcommit_memory

blueshift
  • this is the most disturbing thing I've come to realize in linux kernel, why is the memory allocation designed like this? why not just check availability before allocation? – Sajuuk Mar 27 '19 at 11:11
  • @Sajuuk because this is necessary, I'd rather ask why none of the answers mention the perils of setting `overcommit_memory` to 2. Unless we are talking servers, many simple desktop apps overallocate virtual memory. E.g. some `chrome` processes have VSS = 20G. Evolution has 99.5G on my system. But the record holder is address sanitizer: even a simple "hello world" built with it is [gonna take 20T of virtual memory](https://github.com/google/sanitizers/issues/704#issuecomment-237445176). Have you got 20T free RAM? – Hi-Angel Aug 04 '21 at 22:02
3

I think errno will be set to ENOMEM:

`ENOMEM` is a macro defined in errno.h:

#define ENOMEM          12      /* Out of Memory */

After you call malloc in this statement:

myblock = (void *) malloc(MEGABYTE);

and the function returns NULL, because the system is out of memory.

I found this SO question very interesting.

Hope it helps!

Cacho Santa