There is a race condition here, and the call to time.sleep is what exposes it: without the sleep, one of the workers finishes the whole job before the rest even get started. This becomes much clearer with a longer list of parameters, for example:
from multiprocessing.pool import Pool
from time import sleep

MY_LIST = []

def worker(j):
    global MY_LIST
    sleep(1)  # force the workers to overlap instead of one racing ahead
    MY_LIST.append(j)
    print(j, len(MY_LIST))

if __name__ == '__main__':
    parameters = range(25)
    with Pool(processes=2) as pool:
        results = pool.map(worker, parameters)
This will output something like:
1 1
1 2
1 3
1 4
1 5
1 6
1 7
1 8
1 9
1 10
1 11
1 12
1 13
1 14
1 15
1 16
1 17
1 18
1 19
1 20
1 21
1 1
1 2
1 3
1 4
Of course, that's not the only problem. The other issue is that global variables are not shared between processes: each worker process gets its own copy of MY_LIST, so the parent never sees the appends. To actually share the list you need something like a multiprocessing.Manager and its list() proxy, for example:
from multiprocessing import Pool, Manager

# The Manager runs a server process that owns the shared list; the
# workers append to it through a proxy.  Creating it at module level
# like this relies on the fork start method (the Linux default).
manager = Manager()
MY_LIST = manager.list()

def worker(j):
    global MY_LIST
    MY_LIST.append(j)
    print(j, len(MY_LIST))

if __name__ == '__main__':
    parameters = range(25)
    with Pool(2) as pool:
        results = pool.map(worker, parameters)
This will output something like:
4 1
5 2
6 3
7 4
8 5
9 6
10 7
11 8
12 10
13 11
0 10
1 13
14 12
2 15
15 15
3 16
16 18
17 19
18 20
20 20
19 22
24 23
21 23
22 24
23 25
Of course, this only guarantees shared access to the list; it does nothing to guarantee the order in which the items are appended.
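If what you ultimately want is the collected results, the simplest approach is usually to avoid the shared list altogether and let pool.map do the collecting: it returns the workers' return values in the order of the input iterable, no matter which process handled which item. A minimal sketch along those lines (returning j instead of appending it):

from multiprocessing import Pool
from time import sleep

def worker(j):
    # Do the work and return the result instead of mutating shared state.
    sleep(1)
    return j

if __name__ == '__main__':
    parameters = range(25)
    with Pool(processes=2) as pool:
        # pool.map collects the return values in the order of `parameters`,
        # regardless of which worker processed which item or when.
        results = pool.map(worker, parameters)
    print(results)  # [0, 1, 2, ..., 24]

That sidesteps both problems at once: there is no shared global to synchronise, and the ordering comes for free from map.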