
Assuming there are sufficient virtual memory addresses in the process.

Considering that a 64-bit system has a practically unlimited virtual address space, and assuming there is still physical memory available in the OS memory pool, can we assume there is zero chance of memory allocation failure?

user0002128
  • You might want to read about [memory fragmentation](http://en.wikipedia.org/wiki/Memory_fragmentation). And even if the virtual address *space* is very large, the amount of memory is still limited to the physical RAM plus swap-space, which might be less than you think. – Some programmer dude Nov 13 '13 at 08:19
  • Systems may support limits on memory they're *prepared* to let applications use - even if available - but details are OS specific and your question doesn't specify an OS. Apps/libs may fail if only compiled for 32-bit modes. Fragmentation and rounding may result in wasteful/inflated memory usage, but are unlikely to result in a particularly early allocation failure outside pathological use cases (e.g. malloc N until memory almost full, free every second block, malloc 2N might fail, but you're still talking about having actually used a significant fraction of the overall memory). – Tony Delroy Nov 13 '13 at 08:30
    "almost infinite virtual address"? Folks use to say that about 1 MByte systems. :-) – chux - Reinstate Monica Nov 13 '13 at 13:41

1 Answer


It depends. You could limit a process (e.g. with setrlimit(2) on Linux) to keep it from using all the resources, and there are good reasons to set such limits (e.g. to prevent a buggy program from eating all the resources, and to leave some for other, more important processes).
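For instance, here is a minimal sketch (Linux-specific; the 1 GiB cap is an arbitrary illustration, not a recommendation):

 #include <stdio.h>
 #include <sys/resource.h>

 int main(void) {
     /* Hypothetical cap: restrict this process's virtual address
        space to 1 GiB. */
     struct rlimit rl;
     rl.rlim_cur = 1UL << 30;  /* soft limit */
     rl.rlim_max = 1UL << 30;  /* hard limit */
     if (setrlimit(RLIMIT_AS, &rl) != 0)
         perror("setrlimit");
     /* From now on, allocations beyond the cap fail: malloc
        returns NULL and operator new throws std::bad_alloc. */
     return 0;
 }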

Hence, a well-behaved program should always test whether memory allocation succeeded (e.g. from malloc(3) or operator new, both often built on lower-level syscalls like mmap(2) ...). And of course resources are not infinite (at most physical RAM + swap space).

Often, the only reasonable thing to do on memory exhaustion is to abort the program with a nice message (understandable by sysadmins). Doing fancier things is much more difficult but possible (and required in a server program, because you want to keep serving other requests...).

Hence, in C you should write:

 #include <stdio.h>   /* for perror */
 #include <stdlib.h>  /* for malloc, exit */
 void *p = malloc(somesize);
 if (!p) { perror("malloc"); exit(EXIT_FAILURE); }

You could use _exit or abort if you are worried about handlers registered through atexit(3) calling malloc themselves, but I would not bother.

Often, a routine doing the above is called xmalloc for historical reasons.
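A minimal sketch of such a wrapper (the name is a convention, not part of any standard):

 #include <stdio.h>
 #include <stdlib.h>

 /* Allocate or die loudly; never returns NULL. */
 void *xmalloc(size_t size) {
     void *p = malloc(size);
     if (!p) {
         fprintf(stderr, "xmalloc: out of memory (%zu bytes)\n", size);
         exit(EXIT_FAILURE);
     }
     return p;
 }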

And in C++, operator new may fail by throwing a std::bad_alloc exception (or by returning nullptr if you use new(std::nothrow); see std::nothrow for more).
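A small sketch of both failure modes (the requested size is deliberately absurd so the allocation fails on any real machine):

 #include <cstdint>
 #include <iostream>
 #include <new>

 int main() {
     try {
         char *huge = new char[SIZE_MAX / 2];  // should throw std::bad_alloc
         delete[] huge;
     } catch (const std::bad_alloc &e) {
         std::cerr << "new failed: " << e.what() << '\n';
     }
     // The nothrow form reports failure as nullptr instead of throwing.
     char *maybe = new (std::nothrow) char[SIZE_MAX / 2];
     if (!maybe)
         std::cerr << "new(std::nothrow) returned nullptr\n";
     delete[] maybe;  // deleting a null pointer is harmless
 }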

Learn more about memory overcommit on Linux, virtual memory, and as Joachim Pileborg commented, memory fragmentation. Read about garbage collection.

Basile Starynkevitch