I wanted to try different ways of using multiprocessing, starting with this example:
$ cat multi_bad.py
import multiprocessing as mp
from time import sleep
from random import randint

def f(l, t):
    # sleep(30)
    return sum(x < t for x in l)

if __name__ == '__main__':
    l = [randint(1, 1000) for _ in range(25000)]
    t = [randint(1, 1000) for _ in range(4)]
    # sleep(15)
    pool = mp.Pool(processes=4)
    result = pool.starmap_async(f, [(l, x) for x in t])
    print(result.get())
Here, l is a list that gets copied 4 times when 4 processes are spawned. To avoid that, the documentation page suggests using queues, shared arrays, or proxy objects created with multiprocessing.Manager. For the last one, I changed the definition of l:
$ diff multi_bad.py multi_good.py
10c10,11
< l = [randint(1, 1000) for _ in range(25000)]
---
> man = mp.Manager()
> l = man.list([randint(1, 1000) for _ in range(25000)])
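In other words, the __main__ block of multi_good.py becomes (the rest of the file is unchanged):

if __name__ == '__main__':
    man = mp.Manager()
    l = man.list([randint(1, 1000) for _ in range(25000)])
    t = [randint(1, 1000) for _ in range(4)]
    # sleep(15)
    pool = mp.Pool(processes=4)
    result = pool.starmap_async(f, [(l, x) for x in t])
    print(result.get())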
The results still look correct, but the execution time has increased so dramatically that I think I'm doing something wrong:
$ time python multi_bad.py
[17867, 11103, 2021, 17918]
real 0m0.247s
user 0m0.183s
sys 0m0.010s
$ time python multi_good.py
[3609, 20277, 7799, 24262]
real 0m15.108s
user 0m28.092s
sys 0m6.320s
The docs do say that this way is slower than shared arrays, but this just feels wrong. I'm also not sure how I can profile this to get more information on what's going on. Am I missing something?
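The only idea I've had for profiling so far is to wrap the parent-side call in cProfile, something like the sketch below, but as far as I can tell that only covers this process, not the pool workers or the Manager's server process:

import cProfile
import multiprocessing as mp
from random import randint

def f(l, t):
    return sum(x < t for x in l)

if __name__ == '__main__':
    man = mp.Manager()
    l = man.list([randint(1, 1000) for _ in range(25000)])
    t = [randint(1, 1000) for _ in range(4)]
    pool = mp.Pool(processes=4)
    # Only profiles this (parent) process; sorted by cumulative time.
    cProfile.run('print(pool.starmap_async(f, [(l, x) for x in t]).get())',
                 sort='cumulative')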
P.S. With shared arrays I get times below 0.25s.
P.P.S. This is on Linux and Python 3.3.
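For reference, the shared-array version is roughly along these lines (a simplified sketch, not the exact code I ran; the helper name init_worker and the module-level global are just for illustration). The array is handed to the workers through the Pool initializer, so on Linux the forked workers inherit it instead of getting a pickled copy per task:

import multiprocessing as mp
from random import randint

def init_worker(shared_arr):
    # Runs once per worker: stash the inherited shared array in a global
    # so tasks can read it without it being pickled for every call.
    global arr
    arr = shared_arr

def f(t):
    return sum(x < t for x in arr)

if __name__ == '__main__':
    data = [randint(1, 1000) for _ in range(25000)]
    # 'i' -> C ints; lock=False since the workers only read the array
    shared = mp.Array('i', data, lock=False)
    t = [randint(1, 1000) for _ in range(4)]
    pool = mp.Pool(processes=4, initializer=init_worker, initargs=(shared,))
    print(pool.map(f, t))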