
I am trying to do some computations using the multiprocessing module in Python 2.7.2. My code looks like this:

from multiprocessing import Pool
import sys
sys.setrecursionlimit(10000)
partitions = []
class Partitions:
    parts = {} #My goal is to use this dict to speed
               #up calculations in every process that
               #uses it, without having to build it up
               #from nothing each time
    def __init__(self):
        pass
    def p1(self, k, n):
        if (k, n) in Partitions.parts:
            return Partitions.parts[(k, n)]
        if k > n:
            return 0
        if k == n:
            return 1
        Partitions.parts[(k, n)] = self.p1(k + 1, n) + self.p1(k, n - k)
        return Partitions.parts[(k, n)]

    def P(self, n):
        result = 0
        for k in xrange(1, n / 2 + 1):
            result += self.p1(k, n - k)
        return 1 + result

p = Partitions()

def log(results):
    if results:
        partitions.extend(results)
    return None

def partWorker(start, stop):
    ps = []
    for n in xrange(start, stop):
        ps.append(((1,n), p.P(n)))
    return ps

def main():
    pool = Pool()
    step = 150
    for i in xrange(0,301,step):
        pool.apply_async(partWorker, (i, i + step), callback=log)

    pool.close()
    pool.join()

    return None

if __name__=="__main__":
    main()

I am new to this; I basically copied the format of the prime code on this page: python prime crunching: processing pool is slower? Can I have the processes running on each core all look at the same dictionary to assist their calculations? The way it behaves now, each process creates its own dictionary, and it eats up RAM like crazy.

Broseph

1 Answer


I'm not sure if this is what you want, but take a look at multiprocessing.Manager (http://docs.python.org/library/multiprocessing.html#sharing-state-between-processes). Managers allow you to share a dict between processes.

mgilson