The parent process fails with errno = 12 (ENOMEM, out of memory) when it tries to fork a child. The parent runs on a Linux 3.0 kernel (SLES 11). At the point of forking the child, the parent process has already used up about 70% of the RAM (180 GB of 256 GB). Is there any workaround for this problem?

The application is written in C++, compiled with g++ 4.6.3.

Anirudh Jayakumar

3 Answers

Perhaps virtual memory overcommit is disabled on your system.

If overcommit is disabled, then the total virtual memory cannot be bigger than the size of physical RAM plus swap. If it is allowed, then virtual memory can be bigger than RAM + swap.

When your process forks, your processes (parent and child) would together have 2 × 180 GB of virtual memory, which is too much if you don't have swap.

So, allow overcommit this way:

 echo 1 > /proc/sys/vm/overcommit_memory

This should help, provided the child process calls execve immediately, or frees its allocated memory before the parent writes too much to its own memory. So be careful: the out-of-memory killer may act if both processes keep using all the memory.
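
For illustration, a minimal sketch of that fork-then-exec-immediately pattern (the echo command is just a placeholder for whatever you actually run):

    #include <cstdio>
    #include <cstdlib>
    #include <unistd.h>
    #include <sys/wait.h>

    int main() {
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");        // with strict accounting, this is where ENOMEM appears
            return EXIT_FAILURE;
        }
        if (pid == 0) {
            // Child: exec right away, so the copy-on-write address space
            // is replaced before either process dirties many pages.
            execlp("echo", "echo", "hello from the child", (char *)nullptr);
            _exit(127);            // reached only if exec failed
        }
        int status;
        waitpid(pid, &status, 0);  // parent: reap the child
        return EXIT_SUCCESS;
    }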

The proc(5) man page says:

/proc/sys/vm/overcommit_memory

This file contains the kernel virtual memory accounting mode. Values are:

  0: heuristic overcommit (this is the default)
  1: always overcommit, never check
  2: always check, never overcommit

In mode 0, calls of mmap(2) with MAP_NORESERVE are not checked, and the default check is very weak, leading to the risk of getting a process "OOM-killed". Under Linux 2.4 any nonzero value implies mode 1. In mode 2 (available since Linux 2.6), the total virtual address space on the system is limited to (SS + RAM*(r/100)), where SS is the size of the swap space, and RAM is the size of the physical memory, and r is the contents of the file /proc/sys/vm/overcommit_ratio.
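
To make that formula concrete for a machine like yours (assuming mode 2, no swap, and the default overcommit_ratio of 50): the commit limit would be 0 + 256 GB × 50/100 = 128 GB of total virtual address space, which is already below the 180 GB the parent alone has committed, so any further fork must fail.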

More information here: Overcommit Memory in SLES

SKi

Forking requires resources, since the kernel has to copy-on-write all the writable pages of the process. Read the fork(2) man page again.

You could at least provide a huge temporary swap file. On some file system with enough space, you could create a huge file $SWAPFILE with:

  dd if=/dev/zero of=$SWAPFILE bs=1M count=256000
  mkswap $SWAPFILE
  swapon $SWAPFILE

Otherwise, you could design your program differently: e.g. mmap some big file (and munmap it just before the fork, then mmap it again afterwards); or, more simply, start a popen-ed shell (or a p2open-ed one) at the very beginning of your program, or explicitly make the pipe-s to and from it (a multiplexing call à la poll would probably also be useful), and later issue commands to it. A sketch of that pipe setup is given below.
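
A minimal sketch of that approach, assuming a helper /bin/sh started while the parent is still small (error handling trimmed for brevity):

    #include <cstdio>
    #include <cstring>
    #include <unistd.h>
    #include <sys/wait.h>

    // Spawn /bin/sh early, before the process grows, with two pipes:
    // one carrying commands to the shell, one carrying its output back.
    int main() {
        int to_sh[2], from_sh[2];
        if (pipe(to_sh) < 0 || pipe(from_sh) < 0) { perror("pipe"); return 1; }

        pid_t pid = fork();            // cheap now: little memory to account for
        if (pid == 0) {                // child: wire the pipes to stdin/stdout
            dup2(to_sh[0], 0);
            dup2(from_sh[1], 1);
            close(to_sh[0]); close(to_sh[1]);
            close(from_sh[0]); close(from_sh[1]);
            execl("/bin/sh", "sh", (char *)nullptr);
            _exit(127);                // reached only if exec failed
        }
        close(to_sh[0]); close(from_sh[1]);

        // ... much later, after the parent has grown huge,
        // issue commands without any further fork:
        const char *cmd = "echo hello from the helper shell\n";
        write(to_sh[1], cmd, strlen(cmd));

        char buf[256];
        ssize_t n = read(from_sh[0], buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; fputs(buf, stdout); }

        close(to_sh[1]);               // EOF makes the shell exit
        waitpid(pid, nullptr, 0);
        return 0;
    }

Because the fork happens at startup, the copy-on-write accounting is done while the parent is still tiny; later commands cost no new fork at all. The pair of pipe(2)s also gives you bidirectional communication, which popen(3) alone (being one-directional) does not.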

Maybe we could help more if we had an idea of what your program is doing, why it consumes so much memory, and why and what it is forking...

Read Advanced Linux Programming for more.

PS.

If you fork just to run gdb to show the backtrace, consider simpler alternatives, like the libbacktrace shipped with recent GCC, or Wolf's libbacktrace...
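
If an in-process stack dump is acceptable, even plain glibc can do it without forking anything; a minimal sketch using <execinfo.h> (link with -rdynamic so symbol names resolve):

    #include <execinfo.h>
    #include <unistd.h>

    // Write the current call stack to stderr, with no fork and no malloc
    // (backtrace_symbols_fd is documented as not calling malloc).
    void dump_backtrace() {
        void *frames[64];
        int depth = backtrace(frames, 64);
        backtrace_symbols_fd(frames, depth, STDERR_FILENO);
    }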

Basile Starynkevitch
  • I'm trying to invoke a non-interactive gdb session to get a full backtrace. The huge memory usage is due to the requirements of the application; it is expected and not due to any leaks. – Anirudh Jayakumar Mar 25 '13 at 11:58
  • The suggestion of starting a popen()-ed shell seems like a nice idea, but is it possible to start a bidirectional pipe for both reading and writing? – Anirudh Jayakumar Mar 25 '13 at 12:01
  • You could also use `libbacktrace` or simply run `gdb` interactively with its `-p $pidnumber` argument. – Basile Starynkevitch Mar 25 '13 at 19:00

A nicer solution on Linux would be to use vfork or posix_spawn (which will try to use vfork where possible): vfork "creates new processes without copying the page tables of the parent process", so it will work even if your application uses more than 50% of the available RAM.
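
A minimal posix_spawn sketch (the echo command is just a placeholder for whatever the application needs to run):

    #include <spawn.h>
    #include <sys/wait.h>
    #include <cstdio>

    extern char **environ;

    int main() {
        pid_t pid;
        char *const argv[] = { (char *)"echo",
                               (char *)"spawned without duplicating the address space",
                               nullptr };
        // posix_spawnp searches PATH; glibc tries a vfork-style path
        // internally when no incompatible attributes are requested.
        int err = posix_spawnp(&pid, "echo", nullptr, nullptr, argv, environ);
        if (err != 0) {                // returns an error number, does not set errno
            fprintf(stderr, "posix_spawn failed: %d\n", err);
            return 1;
        }
        waitpid(pid, nullptr, 0);
        return 0;
    }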

Note that std::system and QProcess::execute also use fork under the hood; there is even a ticket about this problem in the Qt framework: https://bugreports.qt.io/browse/QTBUG-17331