
So I knocked up some test code to see how the multiprocessing module would scale on CPU-bound work compared to threading. On Linux I get the performance increase that I'd expect:

Linux (dual quad-core Xeon):
serialrun took 1192.319 ms
parallelrun took 346.727 ms
threadedrun took 2108.172 ms

My dual-core MacBook Pro shows the same behavior:

OS X (dual-core MacBook Pro):
serialrun took 2026.995 ms
parallelrun took 1288.723 ms
threadedrun took 5314.822 ms

I then went and tried it on a Windows machine and got some very different results.

Windows (i7 920):
serialrun took 1043.000 ms
parallelrun took 3237.000 ms
threadedrun took 2343.000 ms

Why, oh why, is the multiprocessing approach so much slower on Windows?

Here's the test code:

#!/usr/bin/env python

import multiprocessing
import threading
import time

def print_timing(func):
    # decorator: report how long the wrapped call takes, in milliseconds
    def wrapper(*arg):
        t1 = time.time()
        res = func(*arg)
        t2 = time.time()
        print '%s took %0.3f ms' % (func.func_name, (t2-t1)*1000.0)
        return res
    return wrapper


def counter():
    # CPU-bound work unit: count to one million and discard the result
    for i in xrange(1000000):
        pass

@print_timing
def serialrun(x):
    for i in xrange(x):
        counter()

@print_timing
def parallelrun(x):
    proclist = []
    for i in xrange(x):
        p = multiprocessing.Process(target=counter)
        proclist.append(p)
        p.start()

    for i in proclist:
        i.join()

@print_timing
def threadedrun(x):
    threadlist = []
    for i in xrange(x):
        t = threading.Thread(target=counter)
        threadlist.append(t)
        t.start()

    for i in threadlist:
        i.join()

def main():
    serialrun(50)
    parallelrun(50)
    threadedrun(50)

if __name__ == '__main__':
    main()
manghole
  • I ran your test code on a quad core Dell PowerEdge 840 running Win2K3, and the results weren't as dramatic as yours, but your point remains valid:
    serialrun took 1266.000 ms
    parallelrun took 1906.000 ms
    threadedrun took 4359.000 ms
    I'll be interested to see what answers you get. I don't know myself. – Jeff Aug 17 '09 at 19:16

5 Answers


The Python documentation for multiprocessing blames the lack of os.fork() for the problems on Windows; without fork, each child process has to start a fresh interpreter and re-import the script, which makes process creation far more expensive. It may be applicable here.

See what happens when you import psyco. First, easy_install it:

C:\Users\hughdbrown>\Python26\scripts\easy_install.exe psyco
Searching for psyco
Best match: psyco 1.6
Adding psyco 1.6 to easy-install.pth file

Using c:\python26\lib\site-packages
Processing dependencies for psyco
Finished processing dependencies for psyco

Add this to the top of your Python script:

import psyco
psyco.full()

I get these results without:

serialrun took 1191.000 ms
parallelrun took 3738.000 ms
threadedrun took 2728.000 ms

I get these results with:

serialrun took 43.000 ms
parallelrun took 3650.000 ms
threadedrun took 265.000 ms

Parallel is still slow, but the others burn rubber.

Edit: also, try it with the multiprocessing pool. (This is my first time trying this and it is so fast, I figure I must be missing something.)

@print_timing
def parallelpoolrun(reps):
    pool = multiprocessing.Pool(processes=4)
    result = pool.apply_async(counter, (reps,))

Results:

C:\Users\hughdbrown\Documents\python\StackOverflow>python  1289813.py
serialrun took 57.000 ms
parallelrun took 3716.000 ms
parallelpoolrun took 128.000 ms
threadedrun took 58.000 ms
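
One likely culprit for the surprising speed: apply_async returns an AsyncResult immediately, so nothing in the snippet above waits for the counting to finish before the timer stops (and counter takes no arguments, so the (reps,) call would raise a TypeError in the worker that only surfaces when .get() is called). A rough sketch of a pool version that does block until the work is done, reusing the question's counter and print_timing with an arbitrary pool size of 4:

@print_timing
def parallelpoolrun(x):
    pool = multiprocessing.Pool(processes=4)            # arbitrary pool size
    results = [pool.apply_async(counter) for i in xrange(x)]
    for r in results:
        r.get()                                          # blocks until that task has finished
    pool.close()
    pool.join()

With that in place the pool timing includes the actual counting, so it is a fairer comparison against parallelrun.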
hughdbrown
  • Very neat! Lowering the number of iterations (processes) while raising the count-to value shows, as Byron said, that the parallel slowness comes from the added setup time of Windows processes. – manghole Aug 17 '09 at 19:51
  • The Pool does not seem to wait for itself to complete; there is a join() method on Pool, but it doesn't seem to do what I think it should do :P – manghole Aug 17 '09 at 20:07

Processes are much more lightweight under UNIX variants. Windows processes are heavy and take much more time to start up. Threads are the recommended way of doing multiprocessing on Windows.
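
A rough way to see that startup cost in isolation is to time processes whose target does nothing, so all you measure is creation and teardown; something like this sketch (the no-op target and the count of 50 are arbitrary choices):

import multiprocessing
import time

def noop():
    pass

if __name__ == '__main__':
    t1 = time.time()
    procs = [multiprocessing.Process(target=noop) for i in xrange(50)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print 'starting 50 empty processes took %0.3f ms' % ((time.time() - t1) * 1000.0)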

Byron Whitlock
  • Oh, interesting. Would a change to the balance of the test, say counting higher but fewer times, let Windows reclaim some multiprocessing performance? I shall give it a go. – manghole Aug 17 '09 at 19:20
  • Tried recalibrating to counting to 10,000,000 with 8 iterations, and the results are more in Windows' favor:
    serialrun took 1651.000 ms
    parallelrun took 696.000 ms
    threadedrun took 3665.000 ms
    – manghole Aug 17 '09 at 19:31

It's been said that creating processes on Windows is more expensive than on Linux. If you search around the site you will find some information. Here's one I found easily.

Duck

Just starting the pool takes a long time. I have found in 'real world' programs that if I can keep a pool open and reuse it for many different jobs, passing the reference down through method calls (usually using map_async), then on Linux I can save a few percent, but on Windows I can often halve the time taken. Linux is always quicker for my particular problems, but even on Windows I get net benefits from multiprocessing.
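
A rough sketch of that pattern (the names here are just for illustration, not from any code above): build the Pool once, pass it to each job, and let map_async dispatch the work:

import multiprocessing

def work(n):
    # stand-in for a real CPU-bound task
    return sum(i * i for i in xrange(n))

def run_job(pool, inputs):
    # reuse the pool that was passed in instead of creating one per job
    return pool.map_async(work, inputs).get()

if __name__ == '__main__':
    pool = multiprocessing.Pool()          # pay the startup cost once
    for job in xrange(5):                  # many jobs, same pool
        print run_job(pool, range(1000, 9000, 1000))
    pool.close()
    pool.join()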

Paul Wells

Currently, your counter() function is not modifying much state. Try changing counter() so that it modifies many pages of memory, then run the CPU-bound loop, and see if there is still a large disparity between Linux and Windows.

I'm not running Python 2.6 right now, so I can't try it myself.
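
For what it's worth, a sketch of what that modified counter might look like (the name and the sizes are arbitrary): dirty one byte on each of a few thousand pages, then run the same CPU-bound loop:

def counter_touching_memory():
    pages = 4096                          # arbitrary: ~16 MB at 4 KB per page
    buf = bytearray(pages * 4096)
    for offset in xrange(0, len(buf), 4096):
        buf[offset] = 1                   # write to every page
    for i in xrange(1000000):             # the original CPU-bound loop
        pass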

Karl Voigtland