
I have several different types of computationally expensive tasks (each taking hours on an average core), for instance 1) sorting a long sequence of numbers and 2) a chemical computation. I have functions for both of these tasks (Python scripts), and each function runs on a single core.

I would like a pool of tasks that I can update at any moment (i.e. add a new task). Whenever a core is available, the tool (perhaps a master process) should take a task from the pool and compute it using the existing function. If the pool is empty, the tool should wait for new tasks to arrive.

To emphasize, the important requirement is to be able to first start the parallelization script and only then add tasks.
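To illustrate, here is a minimal sketch of the behaviour I am after, using only the standard library (`sort_numbers` and `run_chemistry` are just stand-ins for my real single-core functions):

```python
# Minimal sketch with multiprocessing: persistent workers pull tasks from a
# shared queue; sort_numbers / run_chemistry stand in for my real functions.
import multiprocessing as mp

def sort_numbers(seq):
    return sorted(seq)

def run_chemistry(params):
    return params  # placeholder for the expensive chemistry calculation

def worker(task_queue):
    # each worker blocks on the queue and runs whatever task arrives
    while True:
        func, args = task_queue.get()   # waits here while the pool is empty
        func(*args)

if __name__ == "__main__":
    queue = mp.Queue()
    workers = [mp.Process(target=worker, args=(queue,))
               for _ in range(mp.cpu_count())]
    for w in workers:
        w.start()

    # tasks can be put on the queue at any later moment
    queue.put((sort_numbers, ([3, 1, 2],)))
    queue.put((run_chemistry, ({"molecule": "H2O"},)))

    for w in workers:
        w.join()   # workers loop forever, so this just keeps the script alive
```

In this sketch the main process itself puts tasks on the queue; what I do not know is the best way to feed new tasks into a script that is already running (from another terminal, a file, etc.).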

It would also be nice, but not necessary, if I could assign priorities to tasks.
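For the priorities I imagine the master keeping the pool in a heap and always handing out the most urgent task first. Again just a sketch, with made-up `add_task` / `next_task` helpers:

```python
# Sketch of a prioritized task pool kept by the master process
# (lower number = higher priority; the counter keeps insertion order for ties).
import heapq
import itertools

_pool = []
_order = itertools.count()

def add_task(priority, func, args):
    heapq.heappush(_pool, (priority, next(_order), func, args))

def next_task():
    # called by the master whenever a core becomes free; None if pool is empty
    if _pool:
        _priority, _idx, func, args = heapq.heappop(_pool)
        return func, args
    return None
```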

I assume that MPI could help here, but I do not know it very well. It would be great if you could provide a link to a tutorial on a similar problem or write an example =)
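This is roughly how I picture a master/worker layout with mpi4py (an untested sketch based on my very limited understanding; `sort_numbers` again stands in for my real functions), but here the task list is fixed in advance, and I do not see how to turn it into a pool that I can refill while the script is running:

```python
# Untested master/worker sketch with mpi4py; run with e.g. mpiexec -n 4 python script.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

def sort_numbers(seq):
    return sorted(seq)

if rank == 0:
    # master (rank 0): hands tasks to whichever worker reports it is free
    tasks = [(sort_numbers, ([3, 1, 2],)), (sort_numbers, ([9, 8, 7],))]
    status = MPI.Status()
    for task in tasks:
        comm.recv(source=MPI.ANY_SOURCE, status=status)   # a worker is free
        comm.send(task, dest=status.Get_source())
    # tell every worker to shut down once the fixed list is exhausted
    for _ in range(comm.Get_size() - 1):
        comm.recv(source=MPI.ANY_SOURCE, status=status)
        comm.send(None, dest=status.Get_source())
else:
    # workers: announce availability, run whatever the master sends
    while True:
        comm.send(rank, dest=0)
        task = comm.recv(source=0)
        if task is None:
            break
        func, args = task
        func(*args)
```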
