I have loaded an AI model into a class that exposes a prediction function. When I run the predictions on multiple cores it actually takes longer than running them sequentially, because transferring the loaded model to every core takes so long. Is there a way to avoid transferring my class to each core while still letting every core access it?
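Simplified, the setup looks something like this (the loader names are only illustrative, the real models come from bigger libraries):

    class Answerer:
        def __init__(self):
            # Loading these models is what takes the time and the memory.
            # load_translator_model / load_qa_model are placeholders for what I actually use.
            self.translator = load_translator_model()
            self.question_answering = load_qa_model()

        def predict(self, question, links):
            # Crawl each link, translate the text and answer the question.
            ...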
Right now I'm using Python's multiprocessing library to split the work across processes like this:
    import multiprocessing as mp

    with mp.Pool() as pool:
        output = pool.map(
            parallel_process,
            [(
                link,
                question,
                48,
                crawler.webcrawler,
                self.translator,
                self.question_answering
            ) for link in links]
        )

    def parallel_process(inputs):
        # Unpack the variables
        link = inputs[0]
        question = inputs[1]
        n_sentences = inputs[2]
        # Unpack the modules
        webcrawler = inputs[3]
        translator = inputs[4]
        question_answering = inputs[5]
        # ... the rest of the processing (omitted here; it is not the problem,
        # it is the transfer of the modules to every worker that takes so long)
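One thing I have been wondering about is whether each worker could load the modules once through the Pool's initializer instead of them being pickled and sent along with every task, roughly like the sketch below (the load_* functions are placeholders for however the modules would be built inside the worker). I don't know if that is the right way to do it, or whether there is a way to share the already-loaded objects without any copying at all.

    import multiprocessing as mp

    # Module-level state that each worker process fills in once.
    _worker_modules = {}

    def init_worker():
        # Placeholder loaders; in reality these would construct the webcrawler,
        # translator and question_answering objects inside the worker process.
        _worker_modules["webcrawler"] = load_webcrawler()
        _worker_modules["translator"] = load_translator()
        _worker_modules["question_answering"] = load_question_answering()

    def parallel_process(inputs):
        link, question, n_sentences = inputs
        webcrawler = _worker_modules["webcrawler"]
        translator = _worker_modules["translator"]
        question_answering = _worker_modules["question_answering"]
        # ... the rest of the processing ...

    with mp.Pool(initializer=init_worker) as pool:
        output = pool.map(
            parallel_process,
            [(link, question, 48) for link in links]
        )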
If you can either tell me how to do it in general or give a code example in Python, that would be awesome.