
Updated post:

I have a python web application running on a port. It is used to monitor some other processes, and one of its features allows users to restart their own processes. The restart is done by invoking a bash script, which restarts those processes and runs them in the background.

The problem is, whenever I kill off the python web application after I have used it to restart any user's processes, those processes take over the port used by the python web application in a round-robin fashion, so I am unable to restart the python web application because the port is already bound. As a result, I must kill off the processes involved in the restart until nothing occupies the port the python web application uses.

Everything else works; the problem is those processes occupying the port, which is really undesirable.

Processes that may be restarted:

  1. redis-server
  2. newrelic-admin run-program (which spawns another web application)
  3. a python worker process

UPDATE (6 June 2013): I have managed to solve this problem. Look at my answer below.


Original Post:

I have a python web application running on a port. This python program has a function that calls a bash script. The bash script spawns a few background processes, then exits.

The problem is, whenever I kill the python program, the background processes spawned by the bash script will take over and occupy that same port.

Specifically the subprocesses are:

  1. a redis server (with daemonize = true in the configuration file)
  2. newrelic-admin run-program (spawns a web application)
  3. a python worker process

Update 2: I've tried running these with nohup. Only the python worker process doesn't attempt to take over the port after I kill the python web application; the redis server and newrelic-admin still do.

I observed this problem when I was using subprocess.call in the python program to run the bash script. I've tried a double fork method in the python program before running the bash script, but it results in the same problem.

How can I prevent any processes spawned from the bash script from taking over the port?

Thank you.

Update: My intention is that the processes spawned by the bash script should continue running if the python application is killed off, and currently they do. The problem is that when I kill off the python application, the processes spawned by the bash script start to take over its port in a round-robin fashion.

Update 3: Based on the output of 'pstree' and 'ps -axf', processes 1 and 2 (the redis server and the web app spawned by newrelic-admin run-program) are not child processes of the python web application. That makes it even weirder that they take over the python web application's port when I kill it... Does anyone know why?

yanhan
  • you can use [`nohup`](http://linux.die.net/man/1/nohup) from the bash script – Elazar Jun 03 '13 at 04:55
  • See http://stackoverflow.com/questions/14128410/killing-child-process-when-parent-crashes-in-python – devnull Jun 03 '13 at 04:56
  • @Elazar tried nohup, it doesn't work for me... the spawned processes still take over in a round-robin manner – yanhan Jun 03 '13 at 05:04
  • @yanhan yes, I did not read your question thoroughly. sorry. – Elazar Jun 03 '13 at 05:08
  • I posted some updates that might help – yanhan Jun 03 '13 at 05:10
  • How do you launch the subprocesses? Are you taking care to have all file descriptors redirected to/from `/dev/null`? – tripleee Jun 03 '13 at 06:41
  • I am using subprocess.call to run a shell script that will spawn those processes. Will I need to redirect the file descriptors in this case? – yanhan Jun 03 '13 at 06:49
  • What do you mean by "take over the port"? – Armin Rigo Jun 03 '13 at 10:31
  • By "take over the port", I meant that when I did a 'netstat -nltp', the output shows that the port used by the original python web application (now killed) is taken up by another "child" process it spawned (eg. the redis server). – yanhan Jun 04 '13 at 02:10
  • Solved now. Look at my answer. I think I might have phrased my question very badly so perhaps I should rephrase it. – yanhan Jun 06 '13 at 05:00
  • have you tried `close_fds=True`, [`start_new_session`](http://stackoverflow.com/a/13256908/4279), looked at how [circus](https://pypi.python.org/pypi/circus), [supervisord](http://supervisord.org/) do their thing? – jfs Jun 09 '13 at 08:34
  • Hi J.F. Sebastian, I guess I will only look into those if I have the time. Currently using screen has solved my problem. – yanhan Jun 14 '13 at 06:24

1 Answer


Just some background on the methods I've tried to solve my above problem, before I go on to the answer proper:

  1. subprocess.call
  2. subprocess.Popen
  3. execve
  4. the double fork method along with one of the above (http://code.activestate.com/recipes/278731-creating-a-daemon-the-python-way/)

By the way, none of the above worked for me. Whenever I killed off the web application that executes the bash script (which in turn spawns some background processes, which we shall denote Q), the processes in Q would, in a round-robin fashion, take over the port occupied by the web application, so I had to kill them one by one before I could restart my web application.

After many days of living with this problem and moving on to other parts of my project, I thought back to some random Stack Overflow posts and other articles on the Internet, and recalled from my own experience the pattern of ssh'ing into a remote machine, starting a detached screen session, logging out, and logging in again some time later to discover the screen session still alive.

So I thought, hey, what the heck, nothing works so far, so I might as well try using screen to see if it can solve my problem. And to my great surprise and joy it does! So I am posting this solution hopefully to help those who are facing the same issue.

In the bash script, I simply started each process inside a named, detached screen session. For instance, for the redis application, I might start it like this:

screen -dmS redisScreenName redis-server redis.conf

So those processes will keep running on those detached screen sessions they were started with. In this case, I did not daemonize the redis process.

To kill the screen process, I used:

screen -S redisScreenName -X quit

However, this does not kill the redis-server. So I had to kill it separately.

Now, in the python web application, I can just use subprocess.call to execute the bash script, which spawns detached screen sessions (using 'screen -dmS') that run the processes I want. And when I kill off the python web application, none of the spawned processes take over its port. Everything works smoothly.

yanhan