71

I'm trying to make a non-blocking subprocess call to run a slave.py script from my main.py program. I need to pass args from main.py to slave.py once, when it (slave.py) is first started via subprocess.call; after this, slave.py runs for a period of time and then exits.

main.py
for insert, (list) in enumerate(list, start =1):

    sys.args = [list]
    subprocess.call(["python", "slave.py", sys.args], shell = True)


{loop through program and do more stuff..}

And my slave script

slave.py
print sys.args
while True:
    {do stuff with args in loop till finished}
    time.sleep(30)

Currently, slave.py blocks main.py from running the rest of its tasks. I simply want slave.py to be independent of main.py once I've passed args to it; the two scripts no longer need to communicate.

I've found a few posts on the net about non-blocking subprocess.call, but most of them are centered on requiring communication with slave.py at some point, which I currently do not need. Would anyone know how to implement this in a simple fashion?

DavidJB

5 Answers

76

You should use subprocess.Popen instead of subprocess.call.

Something like:

subprocess.Popen(["python", "slave.py"] + sys.argv[1:])

From the docs on subprocess.call:

Run the command described by args. Wait for command to complete, then return the returncode attribute.

(Also, don't pass the arguments as a list if you're going to use shell=True.)
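
For the loop in the question, that might look something like the sketch below (my_args and the literal strings are placeholders, not taken from the original code); each iteration starts a slave and immediately moves on:

import subprocess
import sys

my_args = ["first", "second", "third"]  # stand-in for whatever main.py iterates over

for item in my_args:
    # No shell=True, so the argument is passed straight to slave.py,
    # and Popen returns immediately instead of waiting for slave.py to exit.
    subprocess.Popen([sys.executable, "slave.py", item])

# main.py continues here while the slaves run on their own.

Inside slave.py the argument then shows up as sys.argv[1].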


Here's an MCVE1 that demonstrates a non-blocking subprocess call:

import subprocess
import time

p = subprocess.Popen(['sleep', '5'])

while p.poll() is None:
    print('Still sleeping')
    time.sleep(1)

print('Not sleeping any longer.  Exited with returncode %d' % p.returncode)

An alternative approach, which relies on more recent additions to the Python language that allow for coroutine-based parallelism, is:

# python3.5 required but could be modified to work with python3.4.
import asyncio

async def do_subprocess():
    print('Subprocess sleeping')
    proc = await asyncio.create_subprocess_exec('sleep', '5')
    returncode = await proc.wait()
    print('Subprocess done sleeping.  Return code = %d' % returncode)

async def sleep_report(number):
    for i in range(number + 1):
        print('Slept for %d seconds' % i)
        await asyncio.sleep(1)

loop = asyncio.get_event_loop()

tasks = [
    asyncio.ensure_future(do_subprocess()),
    asyncio.ensure_future(sleep_report(5)),
]

loop.run_until_complete(asyncio.gather(*tasks))
loop.close()
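
On Python 3.7 and newer, the explicit event-loop handling at the end can be replaced with asyncio.run; a minimal equivalent, reusing the two coroutines defined above, would be:

async def main():
    # Run the subprocess and the progress reporter concurrently.
    await asyncio.gather(do_subprocess(), sleep_report(5))

asyncio.run(main())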

1 Tested on OS X using Python 2.7 & Python 3.6.

mgilson
Thanks, this appears to work; however, when I include a while loop in slave.py it seems to get stuck and not perform anything in the loop (even with a timer.sleep() function)..? – DavidJB Apr 17 '13 at 23:59
  • @mgilson: Can you please share a general example of how to use it? I mean, how should the control flow look when using it in a non-blocking way? I appreciate it. – ViFI Nov 18 '16 at 15:48
    @ViFI -- Sure, I've added an example of using `Popen` in a non-blocking way. – mgilson Nov 18 '16 at 16:08
  • Is there a way to start multiple processes asynchronously like `p1 & p2 &` and then wait on all of them using `asyncio`? I'm hoping this does not require the multiprocessing module. – CMCDragonkai Jan 07 '20 at 03:58
  • Sure, you'd just need a `do_p1` function and a `do_p2` function and you'd add both of them to the `tasks` list. – mgilson Jan 07 '20 at 06:29
  • What's the difference between `subprocess.Popen` and the alternative approach using `asyncio`? – Stevoisiak Jun 02 '21 at 14:45
29

There are three levels of thoroughness here.

As mgilson says, if you just swap out subprocess.call for subprocess.Popen, keeping everything else the same, then main.py will not wait for slave.py to finish before it continues. That may be enough by itself.

If you care about zombie processes hanging around, you should save the object returned from subprocess.Popen and at some later point call its wait method. (The zombies will automatically go away when main.py exits, so this is only a serious problem if main.py runs for a very long time and/or might create many subprocesses.)

And finally, if you don't want a zombie but you also don't want to decide where to do the waiting (this might be appropriate if both processes run for a long and unpredictable time afterward), use the python-daemon library to have the slave disassociate itself from the master -- in that case you can continue using subprocess.call in the master.
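
As an illustration of the middle option, here is a minimal sketch (the script name and argument sets are placeholders, not from the question) that keeps each Popen object around and reaps the children later:

import subprocess

arg_sets = [["alpha"], ["beta"], ["gamma"]]  # hypothetical arguments

# Start one slave per argument set; none of these calls block.
children = [subprocess.Popen(["python", "slave.py"] + args) for args in arg_sets]

# ... main.py does more stuff here ...

# Later, reap each child so no zombie processes are left behind.
for child in children:
    child.wait()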

zwol
4

For Python 3.8.x

import shlex
import subprocess

cmd = "<full filepath plus arguments of child process>"
cmds = shlex.split(cmd)
p = subprocess.Popen(cmds, start_new_session=True)

This will allow the parent process to exit while the child process continues to run. Not sure about zombies.

Tested on Python 3.8.1 on macOS 10.15.5
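
If the child should also be insulated from the parent's standard streams (an extension of this pattern on my part, not something the answer above claims to have tested), they can be pointed at DEVNULL:

import shlex
import subprocess

cmd = "<full filepath plus arguments of child process>"
cmds = shlex.split(cmd)

# start_new_session=True gives the child its own session (setsid),
# and DEVNULL stops it from holding onto the parent's stdin/stdout/stderr.
p = subprocess.Popen(
    cmds,
    start_new_session=True,
    stdin=subprocess.DEVNULL,
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)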

JS.
2

For your non-blocking situation, subprocess.Popen already does what you want: it returns as soon as the child has been started. A shell-style & is not needed (and passing " &" as a list element would simply be handed to slave.py as a literal argument):

subprocess.Popen(["python", "slave.py"])

This does not block the execution of the rest of the program.

0

If you want to start a function several times with different arguments in a non-blocking way, you can use a ThreadPoolExecutor.

You submit your function calls to the executor like this:

from concurrent.futures import ThreadPoolExecutor

def threadmap(fun, xs):
    # Run fun once per element of xs, using up to 8 worker threads.
    with ThreadPoolExecutor(max_workers=8) as executor:
        return list(executor.map(fun, xs))
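
For example (a hypothetical usage, with the script name and arguments made up for illustration), the helper above can fan out one subprocess.call per argument; the calls run concurrently in worker threads, although threadmap itself returns only after all of them have finished:

import subprocess

def run_slave(arg):
    # Each call blocks only its own worker thread.
    return subprocess.call(["python", "slave.py", arg])

return_codes = threadmap(run_slave, ["a", "b", "c"])
print(return_codes)
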
0-_-0