
I am spawning a Python process using the `multiprocessing` library, and within it I am spawning around 100 threads (of 3 types, each performing a separate function and using a `Queue`).

I want my process to terminate cleanly when a db flag is raised: all the threads should be killed, the process itself should exit, and a db-level "terminated" flag should be set once everything has ended.

Here's my approach: as soon as the flag is set in the database (I can poll the db at the end of the process code, after spawning the threads), I can exit the poll loop, which exits my process. To kill all the threads, I would have to keep an array of the spawned thread IDs and send a kill to each of them. I need all the threads to be killed along with their connections (the threads are connected to MySQL and Mongo, and there's a websocket connection thread to another server).
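For reference, the usual alternative to sending kills to individual threads is cooperative shutdown: share one `threading.Event` that every thread checks in its loop, so each thread can close its own db/websocket connection before exiting. The sketch below is a minimal illustration of that pattern; the `worker` function and the polling step are hypothetical stand-ins for the real thread bodies and the db flag check.

```python
import threading
import time

stop_event = threading.Event()  # shared shutdown signal for all threads

def worker(idx):
    # Hypothetical worker loop: checks the event instead of blocking forever.
    # A real worker would use e.g. queue.get(timeout=1) so it re-checks often.
    while not stop_event.is_set():
        time.sleep(0.1)
    # Cleanup point: close MySQL/Mongo/websocket connections here.

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()

# In the real code this is the db-polling loop; here we signal immediately.
stop_event.set()
for t in threads:
    t.join()  # all workers exit their loops and clean up
```

After `join()` returns for every thread, the process can set the db-level "terminated" flag and return, which ends the process without any forced kills.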

If someone has a better approach, or if there's any drawback in mine, please suggest.

    You are mixing up multithreading and multiprocessing. If you are using the module `multiprocessing` then you probably won't have any threads but just processes. These can either terminate themselves or get killed from the outside by sending a signal. If you are having threads instead, these will automatically die if the process terminates; then you will not have to do anything else but terminate the father. – Alfe Mar 12 '14 at 08:39
  • I have threads implemented with Threading created inside the process which is created with Multiprocessing. I want my process and threads to die when a flag is raised. If I send a kill signal for those threads inside my process, I am not sure if the existing db/websocket connections will terminate properly i.e. connection is closed. – crazydiv Mar 12 '14 at 08:45
  • With sockets you might like to check the `SO_LINGER` socket option. – cdarke Mar 12 '14 at 09:21
  • related: [Killing all threads and the process from a thread of the same process](http://stackoverflow.com/q/21671753/4279) – jfs Mar 12 '14 at 10:17
  • You just have to make sure that the processes get killed. Each process will then pull down all its threads (in a very uncontrolled way). – Alfe Mar 12 '14 at 13:12

2 Answers


This is something you can try, but no guarantees. This function kills the whole process tree.

import os

import psutil

def kill_proc_tree(pid, including_parent=True):
    parent = psutil.Process(pid)
    # Note: children() replaced the older get_children() in newer psutil versions.
    for child in parent.children(recursive=True):
        child.kill()
    if including_parent:
        # Set the db "terminated" flag here.

        # The next call kills the process itself.
        parent.kill()

# When the db flag is set, run this function.
def terminate():
    me = os.getpid()
    kill_proc_tree(me)

You only need to kill the process; all you have to do is handle the kill signal properly in your process.

I suppose your real question is: "how do I properly handle abrupt termination of the threads created by the process?" The following two answers might help you figure it out:
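As a rough sketch of "handling the kill signal properly": install a SIGTERM handler in the child's main thread that sets a shared stop flag so the worker threads can close their connections before the process exits. The handler body here is hypothetical; signal handlers can only be installed from the main thread.

```python
import signal
import sys

def handle_term(signum, frame):
    # Hypothetical cleanup: set the shared stop flag so threads exit their
    # loops and close their db/websocket connections, e.g.:
    #   stop_event.set()
    #   for t in threads: t.join()
    sys.exit(0)

# Must run in the main thread of the child process.
signal.signal(signal.SIGTERM, handle_term)
```

With this in place, the parent can simply `terminate()` the child and the child still gets a chance to shut its connections down cleanly.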

P.S.

I don't know why you need to create so many threads (100+ is really a lot for today's average computer, which has 4 or fewer CPU cores); maybe you can adopt a more appropriate high-level architecture for your scenario. :)
