
Okay, I'm officially out of ideas after running every sample I could find on Google, up to the 19th page of results. I have a "provider" script. The goal of this Python script is to start up other services that keep running indefinitely, even after the "provider" itself has stopped. Basically: start the process, then forget about it, while the script continues on without blocking.

My problem is python-daemon. I have actions (web-service calls to start/stop/get the status of the started services). I create the start commands on the fly and perform variable substitution on the config files as required.

Let's start from this point: I have a command to run (a bash script that executes a Java process, a long-running service that will be stopped some time later).

import os

import daemon
import daemon.pidfile
import psutil

def start(command, working_directory):
    # I expect the pid of the started application to end up here,
    # but the file is never created - nothing is there.
    pidfile = os.path.join(working_directory, 'application.pid')

    context = daemon.DaemonContext(working_directory=working_directory,
                                   pidfile=daemon.pidfile.PIDLockFile(pidfile))
    with context:
        psutil.Popen(command)

    # This part never runs. Even a simple print statement at this point never appears.
    # Debugging in PyCharm shows that my script returns with 0 on "with context".
    with open(pidfile, 'r') as pf:
        pid = pf.read()

    return pid

From here on, in the caller of this method, I prepare a JSON object to return to the client, which essentially contains an instance_id (don't mind it) and a pid (that will be used to stop this process in another request).

What happens? After the with context block, my application exits with status 0; nothing is returned, no JSON response gets created, no pidfile gets created, and only the psutil.Popen command actually runs. How can I achieve what I need? I need an independently running process, and I need to know its PID in order to stop it later on. The executed process must keep running even if the current Python script stops for some reason. I can't get around the shell script, as that application is not mine; I have to use what I have.
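As far as I can tell from the python-daemon docs, this is exactly what DaemonContext does: on entering the context it detaches by forking, the original process exits with 0, and only the detached daemon continues inside the with block, so nothing after it ever runs in the caller. A minimal sketch of my own that reproduces this (not the provider code; the /tmp path is made up):

import os

import daemon

print("caller pid:", os.getpid())

with daemon.DaemonContext():
    # DaemonContext forks and the original process exits with status 0,
    # so only the detached daemon process reaches this point.
    with open("/tmp/daemonized.pid", "w") as f:
        f.write(str(os.getpid()))

print("never reached in the original caller process")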

Thanks for any tip!

@Edit: I tried simply using Popen from psutil/subprocess, with a somewhat more promising result.

def start(self, command):
    import psutil  # subprocess.Popen behaves the same way here

    proc = psutil.Popen(command)

    return str(proc.pid)

Now, if I debug the application and wait an arbitrary amount of time on the return statement, everything works great: the service is running, the pid is there, and I can stop it later on. Then I ran the provider without debugging. It returns the pid, but the process is not running. It seems like Popen has no time to start the service because the whole provider stops first.
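A rough sketch of a fully detached variant of this approach (my own, assuming Python 3; start_new_session=True makes the child call setsid(), so it lives in its own session and is not tied to the provider):

import subprocess

def start_detached(command, working_directory):
    # Sketch only: start the command in its own session, drop its stdio,
    # and hand back the pid for the later stop request.
    proc = subprocess.Popen(
        command,
        cwd=working_directory,
        stdin=subprocess.DEVNULL,
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
        start_new_session=True,
    )
    return proc.pid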

@Update: Using os.fork:

@staticmethod
def __start_process(command, working_directory):
    pid = os.fork()
    if pid == 0:
        # Child: switch to the working directory, start the service
        # and record its pid
        os.chdir(working_directory)
        proc = psutil.Popen(command)
        with open('application.pid', 'w') as pf:
            pf.write(str(proc.pid))

def start(self):
    ...
    self.__start_process(command, working_directory)
    with open(os.path.join(working_directory, 'application.pid'), 'r') as pf:
        pid = int(pf.read())

    proc = psutil.Process(pid)
    print("RUNNING" if proc.status() == psutil.STATUS_RUNNING else "...")

After running the above sample, RUNNING is written to the console. Then the main script exits (because I'm not fast enough), and:

ps auxf | grep ... shows that no instances are running.

Checking the pidfile: sure, it's there, it was created.

cat /application.pid shows it is empty (0 bytes).
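If I read this right, it is a race: the parent opens application.pid before the forked child has finished writing it, and the child also falls through into the rest of the caller's code after the fork. A sketch of my own (not the code above) that sidesteps the file by sending the pid back over a pipe; it only addresses the empty-pidfile race, not the detaching itself:

import os
import subprocess

def start_process(command, working_directory):
    read_fd, write_fd = os.pipe()
    pid = os.fork()
    if pid == 0:                                   # child
        os.close(read_fd)
        os.chdir(working_directory)
        proc = subprocess.Popen(command)
        os.write(write_fd, str(proc.pid).encode())
        os.close(write_fd)
        os._exit(0)                                # do not fall back into the caller
    os.close(write_fd)                             # parent
    service_pid = int(os.read(read_fd, 32).decode() or "0")
    os.close(read_fd)
    os.waitpid(pid, 0)                             # reap the intermediate child
    return service_pid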


2 Answers


From the multiple partial tips I got, I finally managed to get it working:

import os

def start(command, working_directory):
    pid = os.fork()
    if pid == 0:
        os.setsid()
        os.umask(0)  # I'm not sure about this; not on my notebook at the moment
        # This was strange, as I needed to use the name of the shell script twice:
        # command = script, argv[0], [args]. Upon using ksh as the command I got a nice error...
        os.execv(command[0], command)
    else:
        with open(os.path.join(working_directory, 'application.pid'), 'w') as pf:
            pf.write(str(pid))

        return pid

Together, that solved the issue. The started process runs in its own session, so it is not tied to the running Python script and won't stop when the script terminates.
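For illustration, a hypothetical usage sketch (the paths and arguments below are made up). It shows the quirk mentioned in the comment, passing the script path both as the file to exec and as argv[0], and how the returned pid can be used later by the stop action:

import os
import signal

working_directory = "/opt/myservice"  # made-up path
command = ["/opt/myservice/run.sh", "/opt/myservice/run.sh", "--port", "8080"]

pid = start(command, working_directory)
print("started with pid", pid)

# Later, in the stop web-service call, use the pid from application.pid
# (or the returned value) to terminate the service:
os.kill(pid, signal.SIGTERM)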


Have you tried with os.fork()?

In a nutshell, os.fork() clones the current process: in the parent it returns the PID of the new child process, and in the child it returns 0.

You could do something like this:

#!/usr/bin/env python

import os
import subprocess
import sys
import time

command = 'ls'              # YOUR COMMAND
working_directory = '/etc'  # YOUR WORKING DIRECTORY

def child(command, directory):
    print("I'm the child process, will execute '%s' in '%s'" % (command, directory))
    # Change working directory
    os.chdir(directory)
    # Execute command
    cmd = subprocess.Popen(command,
                           shell=True,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE,
                           stdin=subprocess.PIPE)
    # Retrieve output and error(s), if any
    output = cmd.stdout.read() + cmd.stderr.read()
    print(output.decode())
    # Exiting
    print('Child process ending now')
    sys.exit(0)

def main():
    print("I'm the main process")
    pid = os.fork()
    if pid == 0:
        child(command, working_directory)
    else:
        print('A subprocess was created with PID: %s' % pid)
        # Do stuff here ...
        time.sleep(5)
        print('Main process ending now.')
        sys.exit(0)

if __name__ == '__main__':
    main()

