I have a Python script that needs to load lots of data by calling lots of external commands. After a couple of hours it always crashes with:
....
    process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
  File "/usr/lib/python2.6/subprocess.py", line 621, in __init__
    errread, errwrite)
  File "/usr/lib/python2.6/subprocess.py", line 1037, in _execute_child
    self.pid = os.fork()
OSError: [Errno 12] Not enough space
abort: Not enough space
... even though the machine has far more memory available than the script is actually using.
The root cause seems to be that every fork() momentarily needs as much additional memory as the parent process is already using, so the footprint briefly doubles, and that memory is only freed again once the child calls exec() (see: http://www.oracle.com/technetwork/server-storage/solaris10/subprocess-136439.html). In my case it's even worse because I'm loading the data from multiple threads.
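For context, the loading pattern is roughly the sketch below (simplified; the real commands, thread handling, and parsing are different, and external_loader is just a placeholder name). Several worker threads each shell out via subprocess.Popen, so any of them can trigger a large fork() while the parent already holds a lot of data in memory:

import subprocess
import threading

def load_one(cmd):
    # Each worker shells out; the fork() inside Popen briefly needs as much
    # address space as the whole (already large) parent process.
    process = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE, close_fds=True)
    out, err = process.communicate()
    # ... parse `out` and keep the loaded data ...

# placeholder commands -- the real script builds these from its inputs
commands = [["external_loader", "--chunk", str(i)] for i in range(8)]

threads = [threading.Thread(target=load_one, args=(cmd,)) for cmd in commands]
for t in threads:
    t.start()
for t in threads:
    t.join()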
So, do you see any creative way to work around this issue?