I am running a parallel test with Python 3.7 and Appium 1.15.1 on real Android smartphones.
I use concurrent.futures.ProcessPoolExecutor to run each test on each smartphone.
I pass a list of the smartphones' UDIDs to the map function. This way, my method 'run_smartphone()' (which starts the test) gets the UDID of a smartphone and knows on which device it must run the test.
My script works fine without any issue. But I would like to add a "lock" because 'run_smartphone()' performs some I/O on a sqlite3 database. Correct me if I am wrong, but it would be good practice to "lock" the I/O operations on this sqlite3 database, wouldn't it?
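For reference, the database write I want to protect looks roughly like this (a simplified sketch; the real table and column names are different):

import sqlite3

def save_result(p_udid, p_status):
    # one row per test run; this is the I/O I would like to serialize
    connection = sqlite3.connect("results.db")  # placeholder filename
    connection.execute("CREATE TABLE IF NOT EXISTS results (udid TEXT, status TEXT)")
    connection.execute(
        "INSERT INTO results (udid, status) VALUES (?, ?)",
        (str(p_udid), p_status),
    )
    connection.commit()
    connection.close()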
Here is my original code which works:
# in mymodules.py
def run_smartphone(p_udid):
    # do the stuff
    ...

# in my main script
import concurrent.futures
import mymodules

list_smartphones_connected = [41492968379078, 53519716736397]

with concurrent.futures.ProcessPoolExecutor() as executor:
    try:
        multiprocesses = executor.map(mymodules.run_smartphone, list_smartphones_connected)
    except ValueError:
        print("Error multiprocesses")
So I tried to pass a "lock" to my method 'run_smartphone()'. This is what I wrote:
import concurrent.futures
import multiprocessing
import mymodules

m = multiprocessing.Manager()
lock = m.Lock()

list_arguments_smartphones = []
list_smartphones_connected = [41492968379078, 53519716736397]
for smartphone_connected in list_smartphones_connected:
    list_arguments_smartphones.append([smartphone_connected, lock])

with concurrent.futures.ProcessPoolExecutor() as executor:
    try:
        multiprocesses = executor.map(mymodules.run_smartphone, list_arguments_smartphones)
    except ValueError:
        print("Error multiprocesses")
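Inside 'run_smartphone()', my intention is to take the lock only around the database write, roughly like this (a simplified sketch, not my exact code):

import sqlite3

def run_smartphone(p_arguments):
    p_udid, p_lock = p_arguments  # each item passed by map is [udid, lock]
    # ... start the Appium session on p_udid and run the test ...
    with p_lock:  # serialize access to the sqlite3 database
        connection = sqlite3.connect("results.db")  # placeholder filename
        connection.execute(
            "INSERT INTO results (udid, status) VALUES (?, ?)",
            (str(p_udid), "PASSED"),
        )
        connection.commit()
        connection.close()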
But it doesn't work, and no exception is raised. PyCharm stops the script:
Process finished with exit code 0
I have no idea what is stopping the script.
So I started to investigate by running the script for one smartphone with this line:
multiprocesses = executor.map(mymodules.run_smartphone, [41492968379078, lock])
It gives the same result: the script stops, no automation starts, and I don't see any exception raised (Process finished with exit code 0).
As I wanted to know where exactly the issue was, I ran the script with 'trace':
py -m trace --trace myscript.py
But I can't make sense of it; I don't see any error. You can see the output of this 'trace' command in a text file I uploaded to GitHub:
https://github.com/gauthierbuttez/public/blob/master/trace-log.txt
Does anyone have any idea how I can pass the "lock" to my concurrent.futures.ProcessPoolExecutor()? And is it a good idea to do that?
Thanks.