I'm relatively new to Python3 and am starting to dabble in multiprocessing. I have searched for answers regarding my question, and have seen the starmap solution, but I'm not sure if it will work for me.
My project consists of several .py files. Each one connects to a device, gathers stats, builds an HTML table, and updates my website with the latest data. In order to launch/control these processes, I have a file main.py that handles the multiprocessing:
from multiprocessing import Pool
import os
import time

processes = ('1.py', '2.py', '3.py', '4.py')

def run_process(process):
    os.system('sudo python3 {}'.format(process))

pool = Pool(processes=4)

while True:
    pool.map(run_process, processes)
    time.sleep(60)
Note: I'm using sudo so that Python can read a password from a root-owned file that is used for the SSH connections in each of the processes. Also note: each of my "process" files (i.e. 1.py) can be run independently of main.py, and in no way needs to communicate back to main.py or with any of the other processes.
My question is: how can I pass a variable to each of the processes? I want each process to know how many times it has run, so I can run a specific function once every X runs.
I have no idea how to accomplish this, and this is actually the first time I've been completely stuck since I began learning Python about 4 months ago. I can't simply place a counter in each of the #.py files because of how I'm executing them (they run, then terminate completely when they're done - in other words, from the perspective of #.py, every time it runs is the first time it has ever run). The code below may not be valid Python, but I imagine the solution would look something like this (in an effort to illustrate what I want to do):
from multiprocessing import Pool
import os
import time

processes = ('1.py', '2.py', '3.py', '4.py')

def run_process(process):
    os.system('sudo python3 {}'.format(process))

pool = Pool(processes=4)
run_counter = 0

while True:
    # not valid as written (pool.map's third argument is actually chunksize),
    # but the idea is to hand run_counter to every process on each pass
    pool.map(run_process, processes, run_counter)
    time.sleep(60)
    run_counter += 1
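(This is also where I wondered whether the starmap solution I mentioned at the top might fit. My rough guess at what that would look like is below, but I'm not sure I'm handling the argument pairing correctly:)

from multiprocessing import Pool
import os
import time

processes = ('1.py', '2.py', '3.py', '4.py')

def run_process(process, run_counter):
    # the counter rides along as a command-line argument to the script
    os.system('sudo python3 {0} {1}'.format(process, run_counter))

if __name__ == '__main__':  # standard guard for multiprocessing scripts
    pool = Pool(processes=4)
    run_counter = 0
    while True:
        # starmap unpacks each (script, counter) pair into run_process(script, counter)
        pool.starmap(run_process, [(p, run_counter) for p in processes])
        time.sleep(60)
        run_counter += 1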
To further demonstrate my point: 1.py, for example, would accept the run_counter variable and have some logic that says "if this is the first time I'm running, do foo(); otherwise skip it - unless this is the 10th/20th/30th/etc. run, in which case do foo() again." This is an effort to further optimize my program, because right now it's doing a lot of excess work that isn't strictly necessary on every iteration.
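In other words, inside 1.py I'm picturing roughly this kind of check at the top (foo() here is just a stand-in for the expensive work, and I'm assuming the counter arrives as a command-line argument):

import sys

# the run counter handed over by main.py as the first argument
run_counter = int(sys.argv[1])

def foo():
    # placeholder for the expensive work that only needs to happen occasionally
    pass

if run_counter == 1 or run_counter % 10 == 0:
    # first run, or every 10th run after that
    foo()

# ...normal stats gathering / HTML table / website update continues here...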
Thanks!
Update 1:
I've been playing around with passing a global variable to try to get it to work, but I'm running into an issue with this possible solution. My code example:
from multiprocessing import Pool
import os
import time

processes = ('1.py', '2.py', '3.py', '4.py')
run_counter = 1

def run_process(process):
    global run_counter
    os.system('sudo python3 {0} {1}'.format(process, run_counter))

pool = Pool(processes=4)

while True:
    pool.map(run_process, processes)
    time.sleep(60)
    run_counter += 1
    print("Updating run counter to {}".format(run_counter))
And in my file 1.py I have the following:
import sys
run_counter = sys.argv[1]
print("Run counter from 1.py is {}".format(run_counter))
When I execute main.py, I see the following:
Run counter from 1.py is 1
Updating run counter to 2
Run counter from 1.py is 1
Updating run counter to 3
Run counter from 1.py is 1
I'm aware that changing a global variable requires the global myVar statement before the change, and I've tried that too - both ways produce the same output.
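My guess is that the pool workers get their own copy of run_counter when the pool is created, and never see the increments the parent makes afterwards. A stripped-down test like this is what I have in mind (this is just my understanding of what's going on, not something I've confirmed in the docs):

from multiprocessing import Pool

counter = 1

def show(_):
    # each worker prints the value of the module-level global it sees
    print("worker sees counter = {}".format(counter))

if __name__ == '__main__':
    pool = Pool(processes=2)
    counter = 2  # changed in the parent only, after the workers already exist
    pool.map(show, range(2))
    # I'd expect this to print "worker sees counter = 1" from the workers,
    # matching what I'm seeing with run_counter above

If that's right, then I guess the real question is how to hand the current value into each worker explicitly on every pass through the loop - which is what made me wonder about starmap in the first place.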