My Python script, which is essentially like the example code below, fails to use two processors for two independent processes created via the multiprocessing module.

Two processes are created, executed, and joined, but everything happens on a single core.

I have trouble understanding what the problem is, since on a different machine everything works as expected. multiprocessing.cpu_count() returns 4 on both machines.

So my question is a bit vague: what might be the reason that the processes do not use all available processors?

Example code:

import numpy as np
import multiprocessing

n = 10
mat = np.random.randn(n, n)
vec = np.ones((n, 1))


def compprod():
    # matrix-vector product, run in the first worker process
    print np.dot(mat, vec)
    return


def compsum(times):
    # elementwise scaling, run in the second worker process
    print times * vec
    return


p1 = multiprocessing.Process(target=compprod)
p2 = multiprocessing.Process(target=compsum, args=(n, ))

# start both workers, then wait for them to finish
p1.start()
p2.start()

p1.join()
p2.join()

print 'done'
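
One way to verify whether both workers really land on the same core is to look at field 39 of /proc/<pid>/stat, which on Linux records the CPU a process last ran on. A minimal sketch, assuming Linux (the `busy` helper is just a hypothetical stand-in for a long-running worker):

import time
import multiprocessing

def last_cpu(pid):
    # field 39 of /proc/<pid>/stat is the CPU the process last ran on;
    # split on the last ')' first, since the command name may contain spaces
    with open('/proc/%d/stat' % pid) as f:
        fields = f.read().rsplit(')', 1)[1].split()
    # after the ')' the remaining fields start at field 3, so field 39 is index 36
    return int(fields[36])

def busy():
    # a hypothetical long-running worker, standing in for the real functions
    x = 0
    for i in xrange(10 ** 8):
        x += i

if __name__ == '__main__':
    workers = [multiprocessing.Process(target=busy) for _ in range(2)]
    for w in workers:
        w.start()
    time.sleep(1)  # give the scheduler time to place both workers
    for w in workers:
        print 'pid %d last ran on CPU %d' % (w.pid, last_cpu(w.pid))
    for w in workers:
        w.join()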
Jan
  • If the target functions (e.g. `compprod`, `compsum`) end quickly, both processes may end up running on the same core. – unutbu Jul 02 '14 at 13:20
  • Thanks for the hint, but this is probably not the case in my example, as the functions take hours to terminate. In fact, I really can observe two parallel processes, but on the same core. Did I get you right? – Jan Jul 02 '14 at 13:49
  • 1
  • See this question: http://stackoverflow.com/questions/15414027/multiprocessing-pool-makes-numpy-matrix-multiplication-slower. Numpy changes CPU affinity when it's imported, and that can lead to it being stuck running on a single CPU, even with `multiprocessing`. The linked answer shows how to get around this. I can't say for certain that's what you're hitting, but it's worth a try. – dano Jul 02 '14 at 14:39
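
Following up on dano's comment: a minimal sketch of the workaround from the linked question, assuming Linux with the `taskset` utility available. Some BLAS builds pin the importing process to a single core; resetting the affinity mask after `import numpy` (and before forking the workers) undoes that.

import os
import multiprocessing

import numpy as np  # some BLAS builds restrict CPU affinity on import

# Reset the affinity mask to all CPUs after numpy has been imported.
# 0xff allows CPUs 0-7; adjust the mask to the machine's core count.
os.system("taskset -p 0xff %d" % os.getpid())

def work():
    print np.dot(np.random.randn(10, 10), np.ones((10, 1)))

if __name__ == '__main__':
    workers = [multiprocessing.Process(target=work) for _ in range(2)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()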

0 Answers