Many programs allocate memory in big chunks and then only use part of it. Implementations of malloc() and new probably do that internally even if your program doesn't do it itself.
This leads to programs reserving more memory than they actually use, and to the system running out of memory sooner than necessary. So modern OSes don't actually assign any memory when the program calls malloc() or new; they just note that the program is allowed to access a certain part of the address space. Only later, when the program writes to that part, does the kernel assign actual memory to the needed address (usually with a granularity of 4 KiB).
Your code allocates but never uses any memory. So the kernel just gives you an IOU without ever having to give the program any actual memory. This is often called memory overcommit. Windows does this by default. Linux has settings for this in /proc/sys/vm/overcommit_* to fine-tune it to your needs.
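For reference, the value in /proc/sys/vm/overcommit_memory selects the policy: 0 is heuristic overcommit (the default), 1 always overcommits, and 2 disables overcommit and enforces a commit limit derived from overcommit_ratio or overcommit_kbytes. A tiny sketch that just reads the current setting:

```c++
// Sketch: print the current Linux overcommit policy.
#include <fstream>
#include <iostream>

int main() {
    std::ifstream f("/proc/sys/vm/overcommit_memory");
    int mode = -1;
    if (!(f >> mode)) {
        std::cerr << "could not read /proc/sys/vm/overcommit_memory\n";
        return 1;
    }
    const char *desc = mode == 0 ? "heuristic overcommit (default)"
                     : mode == 1 ? "always overcommit"
                     : mode == 2 ? "no overcommit (strict commit limit)"
                                 : "unknown";
    std::cout << "vm.overcommit_memory = " << mode << " (" << desc << ")\n";
    return 0;
}
```

Changing the setting needs root, e.g. with sysctl -w vm.overcommit_memory=2 or by writing to the file directly.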
Note: If you actually use the memory, the system will first assign physical memory to the parts you use. When that runs out it will start using swap. When the swap is also exhausted, the kernel's OOM killer steps in and kills a process to free memory, quite possibly yours.
A system with memory overcommit means malloc() and new basically never fail, and dealing with out-of-memory becomes impossible since your program simply dies at some unspecified later time. Even the user opening a webpage in a browser can mean your program gets killed: when the system runs out of memory it will pick some (more or less random) task to kill to regain some.
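The victim is not picked entirely at random: the kernel scores every process, and you can bias the choice through /proc/<pid>/oom_score_adj (-1000 means never kill, 1000 means kill first). A small sketch that volunteers the current process as a preferred victim (lowering the value instead usually needs root or CAP_SYS_RESOURCE):

```c++
// Sketch: bias the Linux OOM killer's choice for this process by writing
// to /proc/self/oom_score_adj (range -1000 .. 1000).
#include <fstream>
#include <iostream>

int main() {
    std::ofstream f("/proc/self/oom_score_adj");
    f << 500 << std::flush;   // positive value: prefer killing us over others
    if (!f) {
        std::cerr << "could not adjust oom_score_adj\n";
        return 1;
    }
    std::cout << "oom_score_adj set to 500\n";
    return 0;
}
```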
This means two things:
A normal program can simply ignore out-of-memory. It never happens, or if it does, the program can't handle it anyway. Don't waste your time catching the bad_alloc exception.
A critical program must make sure the system doesn't use memory overcommit, or any attempt to deal with out-of-memory will just fail; see the sketch below for what that handling looks like once overcommit is off.
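To illustrate the difference: once overcommit is off (vm.overcommit_memory = 2), a hopeless allocation is refused up front, so checking the malloc() return value or catching bad_alloc actually has a point. A minimal sketch of what that handling looks like:

```c++
// Sketch: with overcommit disabled, an oversized allocation fails here at
// the call site instead of the process being killed at some later time.
#include <iostream>
#include <new>

int main() {
    try {
        char *p = new char[1ull << 40];   // deliberately absurd request: 1 TiB
        std::cout << "allocation succeeded\n";
        delete[] p;
    } catch (const std::bad_alloc &) {
        // Reached when the kernel refuses the commit; the program can now
        // shut down (or degrade) in a controlled way instead of being killed.
        std::cerr << "out of memory, shutting down cleanly\n";
        return 1;
    }
    return 0;
}
```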