
I implemented a function with 4 nested for loops and it takes a long time to compute, so I'm trying to speed it up with multithreading.

My function looks like this:

def loops(start, end):
    for h in range(start, end):
        for w in range(0, width):
            for h2 in range(h-radius, h+radius):
                for w2 in range(w-radius, w+radius):
                    compute_something()

With multithreading I tried this:

import threading

t1 = threading.Thread(target=loops, args=(0, 150))
t2 = threading.Thread(target=loops, args=(150, 300))
t1.start()
t2.start()
t1.join()
t2.join()

There is no change in computation time compared to just running 0-300 on the main thread.

I also tried joblib multiprocessing like this:

inputs = range(300)
Parallel(n_jobs=core_num)(delayed(loops)(i) for i in inputs)

In this case the computation time was even higher.

Am I doing something wrong, or is there a different way to speed up for loops with multithreading?
The range here is just an example; the loops are usually about 2000*1800*6*6 iterations and it takes 5+ minutes to finish what I'm doing.

mereth
  • Does this answer your question? [Multiprocessing a for loop?](https://stackoverflow.com/questions/20190668/multiprocessing-a-for-loop) – Lukashou-AGH Dec 09 '19 at 10:55

1 Answer


You won't get any speedup in Python from multithreading because of the GIL (Global Interpreter Lock), a mutex around the interpreter that lets only one thread execute Python bytecode at a time. You need to use the multiprocessing package instead. It's included in the standard distribution.

from multiprocessing import Pool

pool = Pool()

Then just use map or starmap; see the multiprocessing docs. But first consider whether you can vectorize your code using numpy; it would be faster that way.

Piotr Rarus