According to the answers to another Stack Overflow question (how to kill (or avoid) zombie processes with subprocess module), one can avoid zombie processes by calling subprocess.Popen.wait().
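For reference, the pattern described in those answers is roughly the following (a minimal sketch; the echo command is just a placeholder):

    import subprocess

    # Start the child and wait for it to finish, so it is reaped
    # immediately and never lingers as a zombie.
    process = subprocess.Popen(["echo", "hello"])
    process.wait()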
However, when I run the following function perform_sth inside my script several thousand times, the memory usage of the individual processes keeps increasing: for example, the first process needs only 7 MB, but process no. 1000 already uses 500 MB, until in the end more than 8 GB are in use and I have to kill the whole Python script. Each process should always use more or less the same amount of memory.
Do I perhaps have a flaw in my function, and do I need to kill the processes explicitly?
My code is:
def perform_sth(arg1, arg2):
    import subprocess
    # Build the command line for the external "sth" tool.
    sth_cline = ["sth", "-asequence=%s" % arg1, "-bsequence=%s" % arg2]
    process = subprocess.Popen(
        sth_cline,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE
    )
    # Wait for the child to terminate so it does not become a zombie.
    process.wait()
    return