I have a fairly straightforward nested for loop that iterates over four arrays:

for a in a_grid:
    for b in b_grid:
        for c in c_grid:
            for d in d_grid:
                do_some_stuff(a, b, c, d)  # perform calculations and write to file

Maybe this isn't the most efficient way to perform calculations over a 4D grid to begin with. I know joblib is capable of parallelizing two nested for loops like this, but I'm having trouble generalizing it to four nested loops. Any ideas?

ylangylang
  • have you tried the obvious? `Parallel(n_jobs=2)(delayed(do_some_stuff)(a, b, c, d) for a in a_grid for b in b_grid for c in c_grid for d in d_grid)`? – Hamms Feb 02 '17 at 22:55
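
For reference, a runnable version of the comment's pattern; the grids and do_some_stuff below are placeholders standing in for the asker's actual code:

from joblib import Parallel, delayed

a_grid = b_grid = c_grid = d_grid = range(5)  # placeholder grids

def do_some_stuff(a, b, c, d):  # stand-in for the real calculation
    return a * b * c * d

# One delayed call per point of the 4D grid, distributed across 2 workers
results = Parallel(n_jobs=2)(
    delayed(do_some_stuff)(a, b, c, d)
    for a in a_grid
    for b in b_grid
    for c in c_grid
    for d in d_grid
)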

3 Answers


I usually use code of this form:

#!/usr/bin/env python3
import itertools
import multiprocessing

# Generate values for each parameter
a = range(10)
b = range(10)
c = range(10)
d = range(10)

# Generate a list of tuples where each tuple is a combination of parameters.
# The list will contain all possible combinations of parameters.
paramlist = list(itertools.product(a, b, c, d))

# A function which will process a tuple of parameters
def func(params):
    a, b, c, d = params
    return a * b * c * d

if __name__ == "__main__":
    # Generate processes equal to the number of cores; the __main__ guard is
    # required on platforms that spawn worker processes (e.g. Windows)
    with multiprocessing.Pool() as pool:
        # Distribute the parameter sets across the cores
        res = pool.map(func, paramlist)
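
If you would rather have func take the four parameters as separate arguments, Pool.starmap (Python 3.3+) unpacks each tuple for you; a minimal variant of the same idea:

from itertools import product
from multiprocessing import Pool

def func(a, b, c, d):
    return a * b * c * d

if __name__ == "__main__":
    paramlist = list(product(range(10), repeat=4))
    with Pool() as pool:
        # Each tuple in paramlist is unpacked into func's four arguments
        res = pool.starmap(func, paramlist)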
Richard
  • Is `paramlist = [a,b,c,d]`? – godimedia Feb 06 '20 at 23:25
  • I ran this script, and `paramlist` looks like [(0, 0, 0, 0), (0, 0, 0, 1), (0, 0, 0, 2), (0, 0, 0, 3), (0, 0, 0, 4), (0, 0, 0, 5), (0, 0, 0, 6), (0, 0, 0, 7), (0, 0, 0, 8), (0, 0, 0, 9), (0, 0, 1, 0), (0, 0, 1, 1), (0, 0, 1, 2), (0, 0, 1, 3), (0, 0, 1, 4), (0, 0, 1, 5), (0, 0, 1, 6), (0, 0, 1, 7), (0, 0, 1, 8), (0, 0, 1, 9), (0, 0, 2, 0), (0, 0, 2, 1), (0, 0, 2, 2), (0, 0, 2, 3), (0, 0, 2, 4), ... (0, 9, 9, 6), (0, 9, 9, 7), (0, 9, 9, 8), (0, 9, 9, 9), ...] – Nesha25 May 18 '23 at 22:22

If you use a tool that makes it easy to parallelize two nested loops, but not four, you can use `itertools.product` to reduce four nested for loops into two:

from itertools import product

for a, b in product(a_grid, b_grid):
    for c, d in product(c_grid, d_grid):
        do_some_stuff(a, b, c, d)
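
The same trick collapses all four loops into a single loop, which is even easier to hand to a tool that parallelizes one loop; a short sketch with placeholder grids and a stand-in do_some_stuff:

from itertools import product

def do_some_stuff(a, b, c, d):  # stand-in for the real calculation
    print(a, b, c, d)

a_grid = b_grid = c_grid = d_grid = range(3)  # placeholder grids

# One flat loop over every (a, b, c, d) combination
for a, b, c, d in product(a_grid, b_grid, c_grid, d_grid):
    do_some_stuff(a, b, c, d)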
user4815162342
    Significant acceleration, that's true. However, it is not parallelization but optimization. Still consuming one core. – Tedo Vrbanec Oct 13 '18 at 02:28
  • @TedoVrbanec By parallelization I referred to iteration over both sequences at once, not in the sense of using two CPUs. Also note that using `itertools.product` is no optimization either, it's just a different way of expressing the iteration. – user4815162342 Oct 13 '18 at 09:02

The number of jobs is not related to the number of nested loops. In that other answer it happened to be `n_jobs=2` with two loops, but the two numbers are completely unrelated.

Think of it this way: You have a bunch of function calls to make; in your case (unrolling the loops):

do_some_stuff(0,0,0,0)
do_some_stuff(0,0,0,1)
do_some_stuff(0,0,0,2)
do_some_stuff(0,0,1,0)
do_some_stuff(0,0,1,1)
do_some_stuff(0,0,1,2)
...

and you want to distribute those function calls across some number of jobs. You could use 2 jobs, or 10, or 100; it doesn't matter. `Parallel` takes care of distributing the work for you.
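
Concretely, the same flattened stream of calls can be split across any worker count just by changing n_jobs; a minimal sketch with placeholder grids, where n_jobs=-1 asks joblib to use every core:

from itertools import product
from joblib import Parallel, delayed

def do_some_stuff(a, b, c, d):  # stand-in for the real calculation
    return a + b + c + d

grids = [range(3)] * 4  # placeholder grids

# The loop structure stays the same no matter how many jobs run it
results = Parallel(n_jobs=-1)(
    delayed(do_some_stuff)(*params) for params in product(*grids)
)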

jwd
  • Right. I was mostly having trouble with structuring the code. I'm new to multiprocessing/joblib, so @Hamms's obvious solution somehow didn't come to mind. It does work though. – ylangylang Feb 03 '17 at 16:44