This may not fully answer your question, since you only showed
part of your code, but it could be a starting point. Using the multiprocessing
module you can create a pool of workers, and with the subprocess
module you can call one instance of your script per worker and check its output:
import multiprocessing as mp
import subprocess as sp

# an example with two runs; each command is an argument list
# so it works without shell=True on any platform
commands = [['python', 'test.py'], ['python', 'test.py']]
# processes sets the number of workers; if it is smaller than
# len(commands), the extra commands run as workers become free
pool = mp.Pool(processes=2)
# execute the script calls in parallel
res = pool.map(sp.check_output, commands)
print(*[item.decode() for item in res])
pool.close()
pool.join()
Attention: check_output returns a byte string, so you need to decode() it back to str.
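As a minimal illustration of that conversion (using a plain echo command as a stand-in for the script):

```python
import subprocess as sp

# check_output captures the command's stdout as bytes
out = sp.check_output(['echo', 'hello'])
print(type(out))             # <class 'bytes'>
print(out.decode().strip())  # hello
```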
I tested it with the following simple program:
import time

if __name__ == "__main__":
    print("Running an instance at {}".format(time.ctime()))
    time.sleep(2)
    print("Finished at {}".format(time.ctime()))
And that is the output:
Running an instance at Thu Oct 11 23:21:44 2018
Finished at Thu Oct 11 23:21:46 2018
Running an instance at Thu Oct 11 23:21:44 2018
Finished at Thu Oct 11 23:21:46 2018
As you can see, they ran at the same time.