I am using multiprocessing in Python with:
import multiprocessing as mp
all_arguments = range(0,20)
pool = mp.Pool(processes=7)
all_items = [pool.apply_async(main_multiprocessing_function, args=(argument_value,)) for argument_value in all_arguments]
for item in all_items:
    item.get()
In the above, as far as I am aware, once a worker process finishes a task it moves on to the next value. Is there any way instead to force a 'new' worker process to be spawned from scratch for each task, rather than re-using the old one?
[Specifically, main_multiprocessing_function calls multiple other functions that each use caching to speed up the processing within a task. All of those caches are redundant for the next item to be processed, so I am interested in a way of resetting everything back to a fresh state.]