
I asked a related question about this problem previously: Parallelize for loop in python

I have a genetic algorithm which I'm trying to speed up by parallelizing the evaluation function. The GA is a class and the code looks something like this:

import copy_reg
import types
from multiprocessing import Pool

# register a reducer so bound methods (e.g. self.costly_function) can be pickled
copy_reg.pickle(types.MethodType,
                lambda method: (getattr, (method.im_self, method.im_func.__name__)),
                getattr)

class GA:

    ...
    ...

    def evaluation(self):
        # a brand-new pool of workers is created on every call
        self.scores = Pool(processes=nprocs).map(self.costly_function, self.population)

    def run(self):
        self.initial_population()
        self.evaluation()
        i = 0
        while self.Gen > i:
            self.select()
            self.crossover()
            self.mutation()
            self.evaluation()
            i += 1

This gives the same results as the sequential version, but it is significantly slower. My guess is that this is because a new pool of worker processes is created for every generation, since evaluation is called inside the while loop. Is there a way to reuse the workers so I can actually get a speed-up?
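
What I'm aiming for is something along these lines, with the pool created once up front and reused by every generation (just a rough sketch of the structure I have in mind, with nprocs as a placeholder):

class GA:

    def __init__(self, nprocs):
        # create the worker pool once and keep it on the instance
        self.pool = Pool(processes=nprocs)

    def evaluation(self):
        # reuse the same workers for every generation
        self.scores = self.pool.map(self.costly_function, self.population)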

1 Answer

Solved the problem by adding the two methods below to the GA class, which I got from https://stackoverflow.com/a/25385582/4759898:

def __getstate__(self):
    # copy the instance dict and drop the pool, which cannot be pickled
    self_dict = self.__dict__.copy()
    del self_dict['pool']
    return self_dict

def __setstate__(self, state):
    self.__dict__.update(state)
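
For completeness: these two methods go on the GA class itself. With the pool created once and stored as self.pool (as sketched in the question), Pool.map pickles the bound method self.costly_function, which in turn pickles the GA instance, and __getstate__ drops the unpicklable pool so that pickling succeeds. Roughly (nprocs is a placeholder):

class GA:

    def __init__(self, nprocs):
        # long-lived pool, excluded from pickling by __getstate__ above
        self.pool = Pool(processes=nprocs)

    # ... __getstate__ / __setstate__ from above ...

    def evaluation(self):
        # every generation reuses the same worker processes
        self.scores = self.pool.map(self.costly_function, self.population)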