Ahoy! I have a Python wrapper that collects some command line options, embeds a subset of them in a file, and calls a subprocess, passing the file name as input and the remaining command line options as flags. It then processes the output and prints it in a different format. The subprocess is called like this:
# generate cfg file
cfg_file = open(cfg_file_name, "w")
...
# call executable
command = "./%s -time %s -model %s" % (executable, args.time, args.model)
if args.branching is not None:
    command += " -branching %s" % args.branching
command += " %s" % cfg_file_name
output = run_and_report(command)
# process output
...
Where run_and_report is defined as:
def run_and_report(cmd):
    """Run command on the shell, report stdout, stderr"""
    proc = run(cmd)
    proc.wait()
    output = "".join(map(lambda x: x.rstrip(), proc.stdout))
    return output
and run as:
from subprocess import Popen, PIPE

def run(cmd):
    """Open process"""
    return Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE, close_fds=True)
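(In case it is relevant: with shell=True, Popen runs the command through /bin/sh -c, so there may be an intermediate shell process between the wrapper and the executable. A shell-free version would presumably look roughly like this sketch, building the flags as a list instead of a string, with the same variables as above:)

# sketch: run the executable directly, without /bin/sh in between
argv = ["./" + executable, "-time", str(args.time), "-model", str(args.model)]
if args.branching is not None:
    argv += ["-branching", str(args.branching)]
argv.append(cfg_file_name)
proc = Popen(argv, stdout=PIPE, stderr=PIPE, close_fds=True)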
The wrapper itself is called in a similar fashion by a higher-level procedure that, every now and then, needs to kill some of the wrapper processes it has spawned. My problem is that killing the wrapper sometimes leaves the executable running: the wrapper is effectively killed, but the underlying process is not. As far as I know, it is not possible to catch a SIGKILL in Python the way you can with other signals, so does anyone know a way to ensure the underlying process is killed as well?
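To make the question concrete, here is the kind of approach I am wondering about: have the higher-level procedure start each wrapper in its own process group and kill the whole group instead of just the wrapper's PID. This is only a rough sketch (assuming Python 3 and POSIX; the wrapper name and arguments below are placeholders):

import os
import signal
from subprocess import Popen

# sketch of the higher-level procedure: start the wrapper in its own
# process group, so the shell and the executable it launches inherit it
wrapper = Popen(["python", "wrapper.py", "-time", "60", "-model", "foo"],
                start_new_session=True)  # placeholder command line

# ... later, kill the whole group instead of only the wrapper process
os.killpg(os.getpgid(wrapper.pid), signal.SIGKILL)

Would something along these lines work, or is there a cleaner way to make sure nothing is left behind?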
Thanks,
Tunnuz