
Is there a way to implement multithreading for multiple for loops under a single function? I am aware that it can be achieved with separate functions, but is it possible within the same function? For example:

def sqImport():
    for i in (0,50):
        do something specific to 0-49
    for i in (50,100):
        do something specific to 50-99
    for i in (100,150):
        do something specific to 100-149

If there are 3 separate functions for the 3 different for loops, then we can do:

threadA = Thread(target = loopA)
threadB = Thread(target = loopB)
threadC = Thread(target = loopC)
threadA.run()
threadB.run()
threadC.run()
# Do work independent of loopA and loopB 
threadA.join()
threadB.join()
threadC.join()

But is there a way to achieve this under a single function?

Austin

2 Answers


First of all: I think you really should take a look at multiprocessing.pool.ThreadPool if you are going to use this in a production system. What I describe below is just a possible workaround (which might be simpler and therefore could be used for testing purposes).

You could pass an id to the function and use it to decide which loop to run, like so:

from threading import Thread

def sqImport(tId):
    if tId == 0:
        for i in range(0, 50):
            print(i)
    elif tId == 1:
        for i in range(50, 100):
            print(i)
    elif tId == 2:
        for i in range(100, 150):
            print(i)

threadA = Thread(target = sqImport, args=[0])
threadB = Thread(target = sqImport, args=[1])
threadC = Thread(target = sqImport, args=[2])
threadA.start()
threadB.start()
threadC.start()
# Do work independent of loopA and loopB 
threadA.join()
threadB.join()
threadC.join()

Note that I used start() instead of run(), because run() does not start a different thread but executes in the current thread context. I also changed your for i in (x, y) loops to for i in range(x, y) loops, because I think you want to iterate over a range and not a tuple (a tuple would iterate only over x and y).
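
A quick way to see that difference for yourself (a minimal sketch of my own, not part of the code above; the thread name "worker" is just an example):

from threading import Thread
import threading

def which_thread():
    # Print the name of the thread this call actually runs in
    print(threading.current_thread().name)

t = Thread(target=which_thread, name="worker")
t.run()    # runs the target in the calling thread -> prints "MainThread"
t.start()  # spawns a new thread and runs the target there -> prints "worker"
t.join()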


An alternative solution using multiprocessing.dummy (which offers the same Pool API as multiprocessing, but backed by threads) might look like this:

from multiprocessing.dummy import Pool as ThreadPool

# The worker function
def sqImport(data):
    for i in data:
        print(i)


# The three ranges for the three different threads
ranges = [
    range(0, 50),
    range(50, 100),
    range(100, 150)
    ]

# Create a threadpool with 3 threads
pool = ThreadPool(3)
# Run sqImport() on all ranges
pool.map(sqImport, ranges)

pool.close()
pool.join()
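
Since pool.map() hands exactly one element of the list to each worker call, that element can also carry extra per-thread data. A rough sketch of that idea (the 'port' labels are just placeholders of mine, not something from the question):

from multiprocessing.dummy import Pool as ThreadPool

def sqImport(job):
    # Each job is a (label, range) pair, so every thread gets its own label
    label, numbers = job
    for i in numbers:
        print(label, i)

jobs = [
    ('port1', range(0, 50)),
    ('port2', range(50, 100)),
    ('port3', range(100, 150))
]

pool = ThreadPool(3)
pool.map(sqImport, jobs)
pool.close()
pool.join()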
exilit
  • Thanks @exilit. Will try that and check. – Austin May 17 '17 at 07:47
  • @exilit, I have different connections in the print for the different for loops, e.g. print 'port1', 'port2', 'port3' respectively. For the 1st range it should print 'port1', and so on. I doubt that will work. – Austin May 17 '17 at 16:54

You can use multiprocessing.pool.ThreadPool, which will divide your tasks equally between the running threads. See Threading pool similar to the multiprocessing Pool? for more on this.

If you are really looking for parallel execution then go for processes, because threads will be limited by Python's GIL (Global Interpreter Lock).
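
A minimal sketch with processes (my own example, assuming the per-range work is CPU-bound and the worker function is defined at module level so it can be pickled):

from multiprocessing import Pool

def sqImport(numbers):
    # Placeholder CPU-bound work for one range
    return sum(i * i for i in numbers)

if __name__ == '__main__':
    ranges = [range(0, 50), range(50, 100), range(100, 150)]
    with Pool(3) as pool:  # 3 worker processes, one per range
        results = pool.map(sqImport, ranges)
    print(results)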

Hitul