
Running Python 3.6.4 (64-bit) on Windows 10, with a 16-core AMD Threadripper CPU, 64 GB RAM, and a fast SSD. Nothing else is running (<2% CPU) or using RAM (55 GB free) before I start this test. Everything runs fast except calling mp.Pool() to set up the worker processes; once set up, pool.map() is fast as expected. Notes: running locally, tested both with and without a virtual env.

Any fixes, workarounds, ideas or explanations would be appreciated. Thanks.

import multiprocessing as mp
import time

if __name__ == '__main__':  # required on Windows, where workers are spawned by re-importing this module
    for x in range(2, 15):
        t0 = time.perf_counter()
        with mp.Pool(processes=x) as pool:
            pass
        print('Done {0} processes in {1:.2f}s'.format(x, time.perf_counter() - t0))

Done 2 processes in 0.79s
Done 3 processes in 1.34s
Done 4 processes in 2.18s
...
Done 12 processes in 6.44s
Done 13 processes in 5.45s
Done 14 processes in 5.73s
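Since the question notes that pool.map() itself is fast once the pool exists, one workaround (independent of whatever makes startup slow) is to pay the startup cost once and reuse the same pool for every map call. A minimal sketch, with a hypothetical `square` worker function:

```python
import multiprocessing as mp
import time


def square(n):
    # Worker must be defined at module top level so 'spawn' can pickle it.
    return n * n


if __name__ == '__main__':
    # Create the pool once (the slow step), then reuse it across calls.
    t0 = time.perf_counter()
    with mp.Pool(processes=4) as pool:
        print('Pool up in {:.2f}s'.format(time.perf_counter() - t0))
        for _ in range(3):
            t1 = time.perf_counter()
            result = pool.map(square, range(100))
            print('map done in {:.4f}s'.format(time.perf_counter() - t1))
    assert result == [n * n for n in range(100)]
```

This amortizes the one-time process creation cost over many map calls rather than paying it in every loop iteration.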
ely
AndyP1970
    Weird, I get a consistent `0.11s`. But that's on linux, so perhaps it's a platform difference. – Azsgy Apr 13 '18 at 13:05
  • @Azsgy I see the exact same result you listed, also using Linux. Can't reproduce the OP's timing results. – ely Apr 13 '18 at 13:10
  • [This discussion](https://stackoverflow.com/a/38236445/567620) about 'spawn' vs. 'fork' for new processes in Windows vs. Linux is probably relevant. – ely Apr 13 '18 at 13:14
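The spawn-vs-fork point from the comments can be checked directly. A minimal sketch (the printed values depend on the platform and Python version):

```python
import multiprocessing as mp

if __name__ == '__main__':
    # Windows only supports the 'spawn' start method, which launches a
    # fresh interpreter and re-imports the main module for every worker.
    # Linux historically defaults to 'fork', which clones the parent
    # process and is much cheaper, so pool startup cost differs between
    # the two platforms even before any antivirus interference.
    print(mp.get_start_method())       # 'spawn' on Windows
    print(mp.get_all_start_methods())
```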

1 Answer


After several hours, I determined that a Bitdefender setting was causing the problem. Under Antivirus there is a setting called "Scan only new and changed files." With it set to Off, the performance problem occurs; turn it On and the timings drop to 0.11s per 5 processes. For documentation purposes, I am running Bitdefender AV Plus 2018 (up to date as of today). I reported the problem to Bitdefender and it was escalated to the next tier of support.

Thanks everyone for the input.

AndyP1970