
I'm just getting acquainted with multiprocessing (as opposed to threading, which is supposed to leverage the multiple cores available). I'm using some dummy calculations to test the speed:

import math

def g(number):
    # Dummy CPU-bound work: a large factorial, scaled by 10.
    f = math.factorial(number)
    f = f * 10

Then I call the function sequentially while measuring the elapsed time:

import time

start_time = time.time()
g(10000)
g(2000)
g(3000)
print("--- %s seconds ---" % (time.time() - start_time))

This prints --- 0.003850698471069336 seconds ---. Now, when I try multiprocessing, either via

from multiprocessing import Pool

start_time_ = time.time()
p = Pool(3)
p.map(g, [10000,2000,3000])
print("--- %s seconds ---" % (time.time() - start_time_))

or

start_time_1 = time.time()
pool = Pool(processes=3)
pool.map(g, [10000,2000,3000])
print("--- %s seconds ---" % (time.time() - start_time_1))

I get a much worse running time, --- 0.0334630012512207 seconds --- and --- 0.6992270946502686 seconds ---.

Why does running the 3 calculations on 3 different cores/processes result in a longer calculation time? Am I doing something wrong? Thank you!
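
For reference, here is the whole test as one self-contained script (this assumes Python 3; the import lines and the `if __name__ == '__main__':` guard are additions on my side, the guard being needed on platforms where worker processes are spawned by re-importing the module, e.g. Windows):

import math
import time
from multiprocessing import Pool

def g(number):
    # Dummy CPU-bound work: a large factorial, scaled by 10.
    f = math.factorial(number)
    f = f * 10

if __name__ == '__main__':
    # Sequential baseline.
    start_time = time.time()
    g(10000)
    g(2000)
    g(3000)
    print("--- %s seconds ---" % (time.time() - start_time))

    # The same three calls spread over a pool of 3 worker processes.
    start_time_ = time.time()
    with Pool(3) as p:
        p.map(g, [10000, 2000, 3000])
    print("--- %s seconds ---" % (time.time() - start_time_))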

  • As a rule of thumb, short computations execute slower in parallel because of the parallelization overhead, and the more processors, the slower. Try with something more substantial. – Yves Daoust Nov 02 '16 at 19:46
  • http://stackoverflow.com/questions/21414462/multicore-cpus-multithreading-and-context-switching – Jared Smith Nov 02 '16 at 19:50
  • @YvesDaoust Thank you! Just adding a few zeroes to the parameters did the trick :) Btw. what's the difference between launching a `Pool` and mapping the functions and launching individual `Process`-es? Is one more efficient than the other? If I understand it correctly, `Pool` also assigns the calculations to individual processes/cores, right? – lte__ Nov 02 '16 at 20:07
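
To make the `Pool`-versus-`Process` question from that last comment concrete, here is a minimal sketch (the input sizes are arbitrary, chosen only to be large enough to outweigh the startup overhead): `Pool.map` farms the arguments out to a fixed set of worker processes and returns the results in input order, while hand-launched `Process` objects give one dedicated process per call and no return value unless you pass it back yourself via a `Queue` or `Pipe`.

import math
from multiprocessing import Pool, Process

def g(number):
    return math.factorial(number) * 10

if __name__ == '__main__':
    numbers = [100000, 100000, 100000]

    # Pool: a fixed set of 3 reusable workers; map blocks until all
    # results are back and returns them in input order.
    with Pool(3) as pool:
        results = pool.map(g, numbers)

    # Individual Process objects: one dedicated process per call;
    # return values are discarded unless sent back via a Queue or Pipe.
    procs = [Process(target=g, args=(n,)) for n in numbers]
    for proc in procs:
        proc.start()
    for proc in procs:
        proc.join()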

0 Answers