
I've tried running things like this:

import subprocess

subprocess.Popen(['nohup', 'my_command'],
                 stdout=open('/dev/null', 'w'),
                 stderr=open('logfile.log', 'a'))

This works if the parent script exits gracefully, but if I kill the script (Ctrl-C), all my child processes are killed too. Is there a way to avoid this?

The platforms I care about are OS X and Linux, using Python 2.6 and Python 2.7.

James

5 Answers


The child process receives the same SIGINT as your parent process because it's in the same process group. You can put the child in its own process group by calling os.setpgrp() in the child process. Popen's preexec_fn argument is useful here:

import os
import subprocess

subprocess.Popen(['nohup', 'my_command'],
                 stdout=open('/dev/null', 'w'),
                 stderr=open('logfile.log', 'a'),
                 preexec_fn=os.setpgrp)

(preexec_fn is for Unix-like systems only. There appears to be a rough equivalent for Windows, creationflags=subprocess.CREATE_NEW_PROCESS_GROUP, but I've never tried it; a sketch follows below.)
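For reference, a minimal untested sketch of that Windows variant. creationflags and subprocess.CREATE_NEW_PROCESS_GROUP are real Popen features, but whether this alone keeps the child alive across Ctrl-C is an assumption here:

import subprocess

# Windows only: start the child in a new process group so it does not
# receive the parent's Ctrl-C events (untested; see the caveat above)
subprocess.Popen(['my_command'],
                 creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)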

CarenRose
JonMc
  • Thanks for your answer; it works for me! However, I am curious why my command stops (process dies) after some point, if I omit the `stdout` & `stderr` arguments. – danuker Dec 08 '17 at 13:09
  • maybe the stdout and stderr buffers fill up and the process becomes deadlocked? – Tom May 21 '18 at 00:01
  • If you are using `shell=True` then `creationflags=subprocess.CREATE_NEW_CONSOLE` is probably what you want – Pro Q Oct 13 '18 at 18:27
  • Is `nohup` needed if you call `setpgrp`? Wouldn't the latter prevent the child from getting `SIGHUP` from the parent, as it is no longer part of the same process group? – rcorre Oct 30 '19 at 20:22
  • It's not clear to me when these `open`'s are closed -- if at all. To me, this is at least implicit behavior and I would bundle them with `with` as written [below](https://stackoverflow.com/a/60514518/3383640). – Suuuehgi Mar 03 '20 at 19:53

The usual way to do this on Unix systems is to fork and exit if you're the parent. Have a look at os.fork().

Here's a function that does the job:

import os
import sys

def spawnDaemon(func):
    # do the UNIX double-fork magic; see Stevens' "Advanced
    # Programming in the UNIX Environment" for details (ISBN 0201563177)
    try:
        pid = os.fork()
        if pid > 0:
            # parent process: return and keep running
            return
    except OSError, e:
        print >>sys.stderr, "fork #1 failed: %d (%s)" % (e.errno, e.strerror)
        sys.exit(1)

    # detach from the controlling terminal and start a new session
    os.setsid()

    # do second fork so the daemon can never reacquire a controlling terminal
    try:
        pid = os.fork()
        if pid > 0:
            # exit from second parent
            sys.exit(0)
    except OSError, e:
        print >>sys.stderr, "fork #2 failed: %d (%s)" % (e.errno, e.strerror)
        sys.exit(1)

    # do stuff
    func()

    # all done
    os._exit(os.EX_OK)
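For illustration, a hypothetical usage sketch (run_my_command and my_command are placeholders, not part of the original answer):

def run_my_command():
    # placeholder: replace the daemon's process image with the real command
    os.execvp('my_command', ['my_command'])

spawnDaemon(run_my_command)
# the parent returns here and keeps running; the daemon, now reparented
# to init (PID 1), survives the parent's exit or Ctrl-C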
Kiwy
edoloughlin
  • If I fork, and then I kill one half of the fork (rather than allowing it to exit), will that kill the new process? – James May 15 '11 at 22:59
  • Okay, after further reading: this requires forking twice to avoid receiving signals? I'd quite like the parent process to remain interactive --- its job is to monitor the processes that it spawns --- which isn't possible if it has to disown the shell. – James May 15 '11 at 23:02
  • Thanks! I've added my implementation to your answer. – James May 20 '11 at 14:10
  • This is great as it sets the daemon **parent process ID** to 1 so that it's completely disconnected from the parent. The subprocess command I ran from the other answer was killed by my Torque job scheduler, even when changing its process group, because the parent process ID still matched the dying process. – storm_m2138 Feb 16 '17 at 19:45
  • In this implementation an intermediate child is left as a zombie until the parent exits. You need to collect its return code in the parent process to avoid that, e.g. by calling `os.waitid(os.P_PID, pid, os.WEXITED)` (before returning in the main process); see the sketch below. – nirvana-msu Jan 22 '20 at 10:05
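A minimal sketch of that fix against the spawnDaemon above; it uses os.waitpid, which exists on Python 2.x (the os.waitid call named in the comment is Python 3 only):

# first-fork block of spawnDaemon, modified to reap the intermediate child
try:
    pid = os.fork()
    if pid > 0:
        # parent: the intermediate child exits right after the second fork,
        # so collect its status here to avoid leaving a zombie, then return
        os.waitpid(pid, 0)
        return
except OSError, e:
    print >>sys.stderr, "fork #1 failed: %d (%s)" % (e.errno, e.strerror)
    sys.exit(1)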

After an hour of various attempts, this works for me:

process = subprocess.Popen(
    ["someprocess"],
    creationflags=subprocess.DETACHED_PROCESS | subprocess.CREATE_NEW_PROCESS_GROUP)

This is a solution for Windows (note that the subprocess.DETACHED_PROCESS constant was added in Python 3.7).

Danil Shaykhutdinov

Since Python 3.2 you can also use the start_new_session flag (POSIX only).

import subprocess

p = subprocess.Popen(["sleep", "60"], start_new_session=True)
ret = p.wait()

See the start_new_session parameter in the Popen constructor documentation.

Zhiwei Huang
  • Yes, but note that the parent process of p is still the calling process. And of course the OP does not want to `p.wait()`. And if p fails while it still has the calling process as its parent, then it will become a zombie process (see the sketch below). – Wolfgang Kuehn Feb 21 '21 at 18:25
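A minimal sketch of how to avoid both the blocking wait and the zombie, assuming the same Popen call as above; p.poll() is the standard non-blocking way to collect the exit status:

import subprocess

p = subprocess.Popen(["sleep", "60"], start_new_session=True)
# do other work instead of p.wait(); poll occasionally so a child that
# has already exited is reaped instead of lingering as a zombie
if p.poll() is not None:
    print("child exited with status", p.returncode)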
You can make the lifetime of the files explicit by bundling the `open`s in a `with` block; the child receives its own copies of the file descriptors when it is spawned, so the parent's handles can be closed as soon as `Popen` returns:

with open('/dev/null', 'w') as stdout, open('logfile.log', 'a') as stderr:
    subprocess.Popen(['my', 'command'], stdout=stdout, stderr=stderr)

class subprocess.Popen(...)

Execute a child program in a new process. On POSIX, the class uses os.execvp()-like behavior to execute the child program. On Windows, the class uses the Windows CreateProcess() function.

os.execvpe(file, args, env)

These functions all execute a new program, replacing the current process; they do not return. On Unix, the new executable is loaded into the current process, and will have the same process id as the caller. Errors will be reported as OSError exceptions.

Suuuehgi