
I'm working on a Python script that is supposed to restart itself.

This is what I do in the Python script:

os.execl('run.sh', '')

My run.sh then looks like this:

#!/bin/bash
sudo fuser -ku 8000/tcp
python /home/app.py

The reason I use sudo fuser -ku 8000/tcp is that it was the easy option: my Python script was the only process using port 8000.

When I run the Python script it does this and then stops:

8000/tcp:             7587(pi)  7596(pi)  7597(pi)  7605(pi)  7606(root)
./run.sh: line 3:  7587 Killed                  python /home/app.py

It never restarts the Python script.

Filip

3 Answers


As per the os.execl() documentation, this function replaces the current Python process, and never returns. So you have this:

  • Python is running, calls execl(run.sh), now no longer running
  • run.sh is running, uses sudo for some ungodly reason (same user, can kill without sudo!), tries to look up the Python program using its port number (?!), and kills it (or not, since you don't handle errors in your shell script at all).
  • run.sh (if it's still alive after all that killing) tries to start the Python script.

This is a terribly convoluted way of doing things. Instead, you should simply replace the Python process with itself:

os.execlp('python', 'python', '/home/app.py')

For bonus points, you could get the current interpreter and script path (e.g. using sys.executable and sys.argv) and just use those instead of hard-coding them. For full details on that, see here: Restarting a self-updating python script
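A minimal sketch of that idea (not from the original answer; it simply combines the standard sys.executable and sys.argv attributes):

import os
import sys

def restart():
    # Replace the current process with a fresh copy of the same interpreter
    # running the same script with the same arguments; execv never returns.
    os.execv(sys.executable, [sys.executable] + sys.argv)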

John Zwinck
  • Thank you! Your solution takes me halfway there. It does restart the Python script, but the Flask server I run in the script gives me this error: error: [Errno 98] Address already in use – Filip Sep 30 '14 at 13:13
  • Look up `SO_REUSEADDR` to fix that. – John Zwinck Sep 30 '14 at 13:14
  • I can't seem to find anything like that for Flask; would I need to expose the socket used by Flask in some way? – Filip Sep 30 '14 at 13:25
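For reference, the SO_REUSEADDR suggestion from the comments looks like this at the plain-socket level (a generic sketch, not Flask-specific; Flask's development server manages its own listening socket):

import socket

# Setting SO_REUSEADDR before bind() lets a restarted process reuse a port
# whose previous socket is still in the TIME_WAIT state.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(('', 8000))
s.listen(5)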

When the Python code is executed, it creates sys.argv.

When Python is invoked, it sets sys.argv to everything but its own executable.

So you will need to use something like os.execlp('python', 'the_directory')

Please see here for a detailed answer on how this works.
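As a quick illustration of what sys.argv contains (a hypothetical invocation with made-up arguments, not from the original answer):

# Invoked as: python /home/app.py --port 8000
import sys
print(sys.argv)   # ['/home/app.py', '--port', '8000'] -- the interpreter itself is not included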

Avinash Babu

This doesn't exactly answer your question, but if what you want to achieve is a service that answers TCP requests with zero downtime and can still be updated, the only proper solution is to pass file descriptors across processes through a UNIX socket.

The skeleton of the code could be like this:

  • The Python script starts and tries to bind a certain UNIX socket. The name of the socket shall start with a zero byte, because that ensures the socket dies when the process does (this is referred to as the Linux abstract socket namespace). A sketch of this step follows the list.
    • If it succeeds, it means there is no previously running instance of the server. Bind the TCP server to port 8000, accept connections and process those requests. Listen on the UNIX socket for the arrival of the next Python instance. When one connects on that UNIX socket, stop accepting incoming connections on the TCP server socket, pass that file descriptor to the next instance, and then pass the file descriptors of the other clients (or, if requests are quick, just finish processing them). Then die.
    • If it fails, it means there is a previously running instance. Connect to it, say hello, and get all the file descriptors through it. The first one is the TCP server socket, so start accepting new connections on it. The other file descriptors belong to already-connected clients.
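A rough sketch of the detection step (the socket name is made up for illustration; on Linux the leading NUL byte puts it in the abstract namespace, so it vanishes with the process):

import socket

HANDOFF_NAME = '\0my-app-handoff'  # hypothetical name in the abstract namespace

def bind_or_connect():
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.bind(HANDOFF_NAME)        # succeeds only if no other instance holds the name
        s.listen(1)
        return 'first_instance', s  # wait on s for the next instance to show up
    except OSError:
        s.connect(HANDOFF_NAME)     # an older instance exists: connect and request its fds
        return 'successor', s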

With that method it is possible to build a truly transparent restart of your server. You need an additional library to pass file descriptors from one process to another, such as this one: http://code.google.com/p/python-passfd/
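If you'd prefer to avoid an extra dependency, Python 3.3+ can pass file descriptors itself via sendmsg()/recvmsg() with SCM_RIGHTS; the following sketch is adapted from the pattern shown in the socket module documentation:

import array
import socket

def send_fds(sock, fds):
    # Send a list of file descriptors over a connected AF_UNIX socket.
    sock.sendmsg([b'x'],
                 [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array('i', fds))])

def recv_fds(sock, maxfds):
    # Receive up to maxfds file descriptors sent with send_fds().
    fds = array.array('i')
    msg, ancdata, flags, addr = sock.recvmsg(1, socket.CMSG_LEN(maxfds * fds.itemsize))
    for level, type_, data in ancdata:
        if level == socket.SOL_SOCKET and type_ == socket.SCM_RIGHTS:
            # Keep only a whole number of native ints.
            fds.frombytes(data[:len(data) - (len(data) % fds.itemsize)])
    return list(fds)

(Python 3.9 later added socket.send_fds() and socket.recv_fds() helpers that wrap the same mechanism.)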

Scout