
This question arises from another question, but you don't need to read that one to understand this one.

I made the following test:

import numpy
from multiprocessing.dummy import Pool
from multiprocessing import cpu_count
import time

def func(x):
    # build an N x N matrix and multiply it by a length-N vector;
    # numpy.dot is the CPU-heavy part of the test
    N = 400
    A = numpy.array([[i*j for i in range(0, N)] for j in range(0, N)])
    h = numpy.array([x for i in range(0, N)])
    y = numpy.dot(A, h.transpose())
    return y[-1]

def multiproc():
    print('Multiple processes')
    print(cpu_count())
    mypool = Pool(cpu_count())
    print(mypool.map(func, [i for i in range(0,100)]))


def multiproc2():
    print('Multiple processes 2')
    pool = Pool(cpu_count())
    # submit all jobs first, then collect the results; calling .get()
    # immediately after each apply_async would block on every job in turn
    # and serialize the whole run
    jobs = [pool.apply_async(func, (i,)) for i in range(100)]
    pool.close()
    pool.join()
    res = numpy.array([job.get() for job in jobs])
    print(res)

def singleproc():
    print('Single process')
    for i in range(0, 100):
        print(func(i))

funcs = [multiproc, singleproc, multiproc2]

for f in funcs:
    start_time = time.time()
    f()
    print("%.6f seconds\n" % (time.time() - start_time))

I got the chance to read a bit about the GIL and the difference between multithreading and multiprocessing. Still, it surprised me that when I ran these tests on an 8-CPU computer I got pretty much the same timing for all three.

Why is it that the multiprocess cases don't run significantly faster than the single-process one?
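For comparison, here is a minimal sketch (not the original test) that contrasts the thread-backed `multiprocessing.dummy.Pool` with a real process pool on a pure-Python CPU-bound function. The `cpu_bound` function and the workload sizes are made up for illustration; the point is that the GIL keeps the thread pool from overlapping pure-Python work, while the process pool can:

```python
import time
from multiprocessing.dummy import Pool as ThreadPool  # thread-backed pool
from multiprocessing import Pool as ProcessPool, cpu_count

def cpu_bound(n):
    # pure-Python loop: holds the GIL the whole time, so threads
    # cannot run copies of it in parallel
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == '__main__':
    work = [200000] * 20
    for name, PoolCls in [('threads', ThreadPool), ('processes', ProcessPool)]:
        start = time.time()
        with PoolCls(cpu_count()) as pool:
            pool.map(cpu_bound, work)
        print('%s: %.3f s' % (name, time.time() - start))
```

On a multi-core machine the process-pool run should be noticeably faster for this kind of function, whereas `numpy.dot` releases the GIL inside its C code, which blurs the difference in the original test.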

myfirsttime1
    http://stackoverflow.com/questions/15639779/why-does-multiprocessing-use-only-a-single-core-after-i-import-numpy describes a similar issue related to Numpy – Hannu Nov 04 '16 at 15:46
  • @Hannu Oh, interesting! So, the reason is because it was still running on a single cpu. – myfirsttime1 Nov 04 '16 at 15:48
  • So I believe. If you remove Numpy function calls, don't import it and just do something else CPU intensive in your function, you get a slightly different result. – Hannu Nov 04 '16 at 15:53
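If the linked report applies, the process may have been pinned to a single core by a BLAS library when numpy was imported. A quick check (a sketch; `os.sched_getaffinity` is Linux-only and not available on macOS or Windows):

```python
import os

# set of CPU indices this process is currently allowed to run on;
# if a BLAS library reset the affinity at import time, this shrinks to one
allowed = os.sched_getaffinity(0)
print("process may run on %d CPU(s)" % len(allowed))
```

If this prints 1 on a multi-core machine, restoring the affinity with `os.sched_setaffinity(0, range(os.cpu_count()))` before creating the pool should let it use all cores again.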

0 Answers