615

The function foo below returns a string 'foo'. How can I get the value 'foo' which is returned from the thread's target?

from threading import Thread

def foo(bar):
    print('hello {}'.format(bar))
    return 'foo'
    
thread = Thread(target=foo, args=('world!',))
thread.start()
return_value = thread.join()

The "one obvious way to do it", shown above, doesn't work: thread.join() returned None.

Super Kai - Kazuya Ito
wim

27 Answers

447

One way I've seen is to pass a mutable object, such as a list or a dictionary, to the thread's constructor, along with an index or other identifier of some sort. The thread can then store its results in its dedicated slot in that object. For example:

def foo(bar, result, index):
    print('hello {0}'.format(bar))
    result[index] = "foo"

from threading import Thread

threads = [None] * 10
results = [None] * 10

for i in range(len(threads)):
    threads[i] = Thread(target=foo, args=('world!', results, i))
    threads[i].start()

# do some other stuff

for i in range(len(threads)):
    threads[i].join()

print(" ".join(results))  # what sound does a metasyntactic locomotive make?

If you really want join() to return the return value of the called function, you can do this with a Thread subclass like the following:

from threading import Thread

def foo(bar):
    print('hello {0}'.format(bar))
    return "foo"

class ThreadWithReturnValue(Thread):
    def __init__(self, group=None, target=None, name=None,
                 args=(), kwargs={}, Verbose=None):
        Thread.__init__(self, group, target, name, args, kwargs, Verbose)
        self._return = None
    def run(self):
        if self._Thread__target is not None:
            self._return = self._Thread__target(*self._Thread__args,
                                                **self._Thread__kwargs)
    def join(self):
        Thread.join(self)
        return self._return

twrv = ThreadWithReturnValue(target=foo, args=('world!',))

twrv.start()
print(twrv.join())   # prints foo

That gets a little hairy because of some name mangling, and it accesses "private" data structures that are specific to Thread implementation... but it works.

For Python 3:

class ThreadWithReturnValue(Thread):

    def __init__(self, group=None, target=None, name=None,
                 args=(), kwargs=None, *, daemon=None):
        Thread.__init__(self, group, target, name, args, kwargs,
                        daemon=daemon)
        self._return = None

    def run(self):
        if self._target is not None:
            self._return = self._target(*self._args, **self._kwargs)

    def join(self, *args):
        Thread.join(self, *args)
        return self._return
kindall
  • thanks, i can see that that would be fine as a workaround, but it changes the function definition so that it doesn't really `return` anything. i wanted to know in my original case, where _does_ that 'foo' actually go...? – wim Aug 02 '11 at 03:53
  • @wim: Return values go somewhere only if you put them somewhere. Unfortunately, in the case of `Thread`, all that happens inside the class -- the default `run()` method does not store off the return value, so you lose it. You could write your own `Thread` subclass to handle this, though. I've taken a whack at it in my message. – kindall Aug 03 '11 at 01:20
  • cool, thanks for the example! i wonder why Thread was not implemented with handling a return value in the first place, it seems like an obvious enough thing to support. – wim Aug 03 '11 at 01:28
  • I think this should be the accepted answer - the OP asked for `threading`, not a different library to try, plus the pool size limitation introduces an additional potential problem, which happened in my case. – domoarigato Jan 31 '15 at 20:05
  • On python3 this returns ``TypeError: __init__() takes from 1 to 6 positional arguments but 7 were given``. Any way to fix that? – GuySoft Oct 30 '16 at 15:20
  • `join` has a timeout parameter that should be passed along – Teivaz Aug 22 '18 at 10:44
  • Warning for anyone tempted to do the second of these (the `_Thread__target` thing). You will make anyone trying to port your code to python 3 hate you until they work out what you've done (because of using undocumented features that changed between 2 and 3). Document your code well. – Ben Taylor Nov 20 '19 at 09:06
  • @GuySoft I took a look into help(Thread) and found that Verbose can't be a positional argument -- we need to use `Thread.__init__(self, group, target, name, args, kwargs)` – Seth Jan 27 '20 at 21:49
  • @Seth You commented not on my answer; it's a long thread and a mess and took me a while to hunt it down myself. The line in my answer is ``Thread.__init__(self, group, target, name, args, kwargs, daemon=daemon)``. It might have changed; I haven't tested in a while. – GuySoft Jan 28 '20 at 15:21
  • Could using mutable defaults (like args=(), kwargs={}, Verbose=None) instead of (*args, **kwargs) in the wrapper cause issues if you are making simultaneous calls to ThreadWithReturnValue? – Lars Jun 06 '20 at 02:34
  • this does not destroy the thread in memory – Shivam Pandey Apr 15 '21 at 11:53
  • for the first example, instead of `results = [None] * 10`, modify to pass a list of lists so that each thread has a bucket to dump to: `threads=5` `results = []` `[results.append([]) for i in range(threads)]` – grantr Sep 13 '21 at 13:04
357

FWIW, the multiprocessing module has a nice interface for this using the Pool class. And if you want to stick with threads rather than processes, you can just use the multiprocessing.pool.ThreadPool class as a drop-in replacement.

def foo(bar, baz):
    print('hello {0}'.format(bar))
    return 'foo' + baz

from multiprocessing.pool import ThreadPool
pool = ThreadPool(processes=1)

async_result = pool.apply_async(foo, ('world', 'foo')) # tuple of args for foo

# do some other stuff in the main process

return_val = async_result.get()  # get the return value from your function.
Jake Biesinger
  • this is the best answer as it's non-intrusive – Jayen Dec 04 '13 at 04:28
  • It should be noted that in python 2.7 you can't pass objects in the args tuple. (just checked, can't do it in 3.2 either) – CornSmith May 06 '14 at 00:00
  • @CornSmith not sure what you mean... works fine for me in python 2.7.5... Just don't forget to wrap the arguments in a tuple ```(object1,)```: ```def foo2(testinstance): print 'hello {0}'.format(testinstance.bar) class testme(object): def __init__(self, bar): self.bar = bar async_result = pool.apply_async(foo2, (testme('I am bar'),))``` – Jake Biesinger May 06 '14 at 04:11
  • Well I was trying to pass a `multiprocessing.Listener()` instance. I should rephrase it: you can't pass anything that can't be pickled. I ended up just using normal threading from kindall's second answer. – CornSmith May 07 '14 at 05:27
  • Gotcha. I didn't realize the ThreadPool pickles across thread boundaries, but that's certainly how the ProcessPool works. – Jake Biesinger May 07 '14 at 16:03
  • I don't understand why it's the accepted answer. The question was 'how to get result from thread'. Replacing the threading module with multiprocessing replaces threads with processes. I hope you know the difference between threads and processes? It's like killing a fly with a bazooka. – omikron Apr 18 '15 at 14:09
  • @omikron The *ThreadPool* doesn't spawn a new process, but has the very nice multiprocess.Pool interface. Besides, in Python, multithreading is borked (in the sense that only one thread can run at a time). In my experience, you just end up using processes for any task w/parallel execution. – Jake Biesinger Apr 18 '15 at 19:07
  • @JakeBiesinger My point is that I was looking for an answer for how to get a response from a Thread, came here, and the accepted answer doesn't answer the question stated. I differentiate threads and processes. I know about the Global Interpreter Lock; however, I'm working on an I/O-bound problem so Threads are ok, I don't need processes. Other answers here better answer the question stated. – omikron Apr 19 '15 at 09:17
  • @omikron But threads in python don't return a response unless you use a subclass that enables this functionality. Of possible subclasses, ThreadPools are a great choice (choose # of threads, use map/apply w/sync/async). Despite being imported from `multiprocess`, they have nothing to do with Processes. – Jake Biesinger Apr 20 '15 at 02:17
  • @JakeBiesinger Oh, I'm blind. Sorry for my unnecessary comments. You are right. I just assumed that multiprocessing = processes. – omikron Apr 20 '15 at 09:25
  • Don't forget to set `processes=1` to more than one if you have more threads! – iman Jun 16 '15 at 11:46
  • This method has a lot of overhead! It starts 3 managing Threads, does a lot of bureaucracy and imports piles of modules. Better simply subclass or wrap `threading.Thread()` and define your new run() method to save the result as `self.ret = ...` or so. – kxr Aug 17 '16 at 11:03
  • Do not use ThreadPool. It cannot handle the ctrl-C termination signal and also does not capture the stack trace when it fails. Use Thread instead. – max Feb 06 '17 at 20:32
  • it is way slower with tensorflow prediction. I tried running a prediction task with this and it is way slower –  Jul 31 '17 at 06:57
  • In my opinion this answer is also relevant to the question here: https://stackoverflow.com/a/26104609/1971003 – Guy Avraham Nov 17 '17 at 22:09
  • Ugh, ThreadPool is broken in 3.4, it won't finish all work if one thread throws an exception. Beware if you run multiple versions of Python. – Bouke Dec 02 '17 at 14:34
  • The problem with multiprocessing and the thread pool is that it is much slower to set up and start threads compared to the basic threading library. It's great for starting long-running threads but defeats the purpose when needing to start a lot of short-running threads. The solution of using "threading" and "Queue" documented in other answers here is a better alternative for that latter use case in my opinion. – Yves Dorfsman Jan 08 '18 at 14:41
  • Posted my simplified solution for saving the result below but it's pretty far down, so commenting here for visibility: stackoverflow.com/a/65447493/9983575 - This approach doesn't depend on any other modules (uses a closure function inside a `threading.Thread` subclass instead), so a lot of the concerns discussed in the comments here aren't a problem :) – slow-but-steady Feb 25 '21 at 05:44
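Following up on the comment about setting processes to more than one: a minimal sketch of running several tasks at once with ThreadPool.map (the `square` function is illustrative, not from the answer above):

```python
from multiprocessing.pool import ThreadPool

def square(x):
    return x * x

pool = ThreadPool(processes=4)            # four worker threads instead of one
results = pool.map(square, [1, 2, 3, 4])  # blocks until all tasks return; preserves input order
pool.close()
pool.join()
print(results)  # → [1, 4, 9, 16]
```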
293

In Python 3.2+, the stdlib concurrent.futures module provides a higher-level API for threading, including passing return values or exceptions from a worker thread back to the main thread:

import concurrent.futures

def foo(bar):
    print('hello {}'.format(bar))
    return 'foo'

with concurrent.futures.ThreadPoolExecutor() as executor:
    future = executor.submit(foo, 'world!')
    return_value = future.result()
    print(return_value)
Ramarao Amara
  • For those wondering, this can be done with a list of threads. `futures = [executor.submit(foo, param) for param in param_list]` The order will be maintained, and exiting the `with` will allow result collection. `[f.result() for f in futures]` – jayreed1 Jun 04 '20 at 21:29
  • @jayreed1 that comment deserves an answer of its own or it should be included in the answer. Very useful. – Damien Aug 05 '20 at 10:39
  • Wow.. thanks for the answer, was searching for a multiprocessing solution for my code, but this helps me to do it in so simple a way, and @jayreed1's comment made it the cherry on the cake, thank you all... – DevPy Apr 27 '21 at 13:22
  • Thank you very much, this helped me to fix an issue I found in some non-thread-safe libs. I liked your answer from there. My Q&A: https://stackoverflow.com/questions/68982519/ansible-runner-consecutive-calls-mess-up-when-done-too-fast/68982761#68982761 – xCovelus Aug 30 '21 at 10:50
  • I've never worked with this library before. Do I have to close the thread somehow so it won't "dangle loose", or will the executor take care of that for me automatically if I only use the code as shown here? – Rumi P. Sep 22 '21 at 11:41
  • Note that for executing multiple futures, you can also use `executor.map()`. For example, `results = executor.map(fn, iter_of_args)`. https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.Executor.map – Giraugh May 18 '22 at 02:57
  • with `concurrent.futures`, be very wary of deadlocks, especially when limiting the number of threads for `ThreadPoolExecutor()`. If one of the threads tries to spawn another one when the pool is full, it's a full silent deadlock with no warnings. – LogicDaemon Apr 24 '23 at 16:51
  • Is `future.result()` blocking? Is there a non-blocking function to check whether the future has a result yet? – Cruncher Jul 20 '23 at 16:43
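As a runnable sketch of the pattern from jayreed1's comment above (submitting a list of tasks and collecting results in submission order; the parameter values are illustrative):

```python
import concurrent.futures

def foo(bar):
    return 'hello {}'.format(bar)

param_list = ['world!', 'there!', 'again!']

with concurrent.futures.ThreadPoolExecutor() as executor:
    # submit() returns immediately; the futures list preserves submission order
    futures = [executor.submit(foo, param) for param in param_list]
    # result() blocks until each future completes, so results line up with param_list
    results = [f.result() for f in futures]

print(results)  # → ['hello world!', 'hello there!', 'hello again!']
```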
113

Jake's answer is good, but if you don't want to use a threadpool (you don't know in advance how many threads you'll need, and want to create them as needed), then a good way to transmit information between threads is the built-in Queue.Queue class (queue.Queue in Python 3), as it offers thread safety.

I created the following decorator to make it act in a similar fashion to the threadpool:

import threading
import Queue  # Python 2; on Python 3 use "import queue" instead

def threaded(f, daemon=False):
    def wrapped_f(q, *args, **kwargs):
        '''this function calls the decorated function and puts the
        result in a queue'''
        ret = f(*args, **kwargs)
        q.put(ret)

    def wrap(*args, **kwargs):
        '''this is the function returned from the decorator. It fires off
        wrapped_f in a new thread and returns the thread object with
        the result queue attached'''

        q = Queue.Queue()

        t = threading.Thread(target=wrapped_f, args=(q,)+args, kwargs=kwargs)
        t.daemon = daemon
        t.start()
        t.result_queue = q
        return t

    return wrap

Then you just use it as:

@threaded
def long_task(x):
    import time
    x = x + 5
    time.sleep(5)
    return x

# does not block, returns Thread object
y = long_task(10)
print(y)

# this blocks, waiting for the result
result = y.result_queue.get()
print(result)

The decorated function creates a new thread each time it's called and returns a Thread object that contains the queue that will receive the result.

UPDATE

It's been quite a while since I posted this answer, but it still gets views so I thought I would update it to reflect the way I do this in newer versions of Python:

Python 3.2 added the concurrent.futures module, which provides a high-level interface for parallel tasks. It provides ThreadPoolExecutor and ProcessPoolExecutor, so you can use a thread or process pool with the same API.

One benefit of this API is that submitting a task to an Executor returns a Future object, which will complete with the return value of the callable you submit.

This makes attaching a queue object unnecessary, which simplifies the decorator quite a bit:

from concurrent.futures import ThreadPoolExecutor
from functools import wraps

_DEFAULT_POOL = ThreadPoolExecutor()

def threadpool(f, executor=None):
    @wraps(f)
    def wrap(*args, **kwargs):
        return (executor or _DEFAULT_POOL).submit(f, *args, **kwargs)

    return wrap

This will use a default module-level thread pool executor if one is not passed in.

The usage is very similar to before:

@threadpool
def long_task(x):
    import time
    x = x + 5
    time.sleep(5)
    return x

# does not block, returns Future object
y = long_task(10)
print(y)

# this blocks, waiting for the result
result = y.result()
print(result)

If you're using Python 3.4+, one really nice feature of this method (and Future objects in general) is that the returned future can be wrapped with asyncio.wrap_future to turn it into an asyncio.Future. This makes it work easily with coroutines:

result = await asyncio.wrap_future(long_task(10))
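For context, here is a self-contained sketch of that pattern (the decorator from above is repeated inline; `blocking_add` is an illustrative name):

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor
from functools import wraps

_DEFAULT_POOL = ThreadPoolExecutor()

def threadpool(f, executor=None):
    @wraps(f)
    def wrap(*args, **kwargs):
        # submit to the pool and wrap the concurrent.futures.Future
        # so it can be awaited inside a coroutine
        return asyncio.wrap_future((executor or _DEFAULT_POOL).submit(f, *args, **kwargs))
    return wrap

@threadpool
def blocking_add(x, y):
    time.sleep(0.1)  # simulate blocking work kept off the event loop thread
    return x + y

async def main():
    return await blocking_add(2, 3)

print(asyncio.run(main()))  # → 5
```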

If you don't need access to the underlying concurrent.futures.Future object, you can include the wrap in the decorator:

import asyncio
from concurrent.futures import ThreadPoolExecutor
from functools import wraps

_DEFAULT_POOL = ThreadPoolExecutor()

def threadpool(f, executor=None):
    @wraps(f)
    def wrap(*args, **kwargs):
        return asyncio.wrap_future((executor or _DEFAULT_POOL).submit(f, *args, **kwargs))

    return wrap

Then, whenever you need to push cpu intensive or blocking code off the event loop thread, you can put it in a decorated function:

@threadpool
def some_long_calculation():
    ...

# this will suspend while the function is executed on a threadpool
result = await some_long_calculation()
bj0
  • I can't seem to get this to work; I get an error stating `AttributeError: 'module' object has no attribute 'Lock'` this appears to be emanating from the line `y = long_task(10)`... thoughts? – sadmicrowave Sep 13 '13 at 02:26
  • The code doesn't explicitly use Lock, so the problem could be somewhere else in your code. You may want to post a new SO question about it – bj0 Sep 14 '13 at 04:18
  • Why is result_queue an instance attribute? Would it be better if it was a class attribute so that users won't have to know to call result_queue when using @threaded, which is not explicit and is ambiguous? – nonbot Aug 08 '17 at 18:56
  • @t88, not sure what you mean, you need some way of accessing the result, which means you need to know what to call. If you want it to be something else you can subclass Thread and do what you want (this was a simple solution). The reason the queue needs to be attached to the thread is so that multiple calls/functions have their own queues – bj0 Aug 09 '17 at 19:24
  • Very succinct and easy to implement (Python 2.7) – Luke Madhanga Jul 05 '19 at 15:33
  • Where should `@wraps` be imported from? – Leonardo Rick Oct 17 '20 at 00:00
  • @LeonardoRick it's in the functools module: https://docs.python.org/3/library/functools.html#functools.wraps – bj0 Oct 19 '20 at 15:29
94

Another solution that doesn't require changing your existing code:

import Queue             # Python 2.x
#from queue import Queue # Python 3.x

from threading import Thread

def foo(bar):
    print 'hello {0}'.format(bar)     # Python 2.x
    #print('hello {0}'.format(bar))   # Python 3.x
    return 'foo'

que = Queue.Queue()      # Python 2.x
#que = Queue()           # Python 3.x

t = Thread(target=lambda q, arg1: q.put(foo(arg1)), args=(que, 'world!'))
t.start()
t.join()
result = que.get()
print result             # Python 2.x
#print(result)           # Python 3.x

It can be also easily adjusted to a multi-threaded environment:

import Queue             # Python 2.x
#from queue import Queue # Python 3.x
from threading import Thread

def foo(bar):
    print 'hello {0}'.format(bar)     # Python 2.x
    #print('hello {0}'.format(bar))   # Python 3.x
    return 'foo'

que = Queue.Queue()      # Python 2.x
#que = Queue()           # Python 3.x

threads_list = list()

t = Thread(target=lambda q, arg1: q.put(foo(arg1)), args=(que, 'world!'))
t.start()
threads_list.append(t)

# Add more threads here
...
threads_list.append(t2)
...
threads_list.append(t3)
...

# Join all the threads
for t in threads_list:
    t.join()

# Check thread's return value
while not que.empty():
    result = que.get()
    print result         # Python 2.x
    #print(result)       # Python 3.x
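Since values come off the queue in completion order rather than start order, one option (per the comment discussion below, in Python 3 syntax with illustrative names) is to tag each value with an index so results can be reassembled in start order:

```python
from queue import Queue
from threading import Thread

def foo(bar):
    return 'hello {0}'.format(bar)

que = Queue()
threads = []
for i, arg in enumerate(['world!', 'again!']):
    # tag each result with the thread's index so queue ordering doesn't matter
    t = Thread(target=lambda q, idx, a: q.put((idx, foo(a))), args=(que, i, arg))
    t.start()
    threads.append(t)

for t in threads:
    t.join()

# reassemble results in the order the threads were started
results = [None] * len(threads)
while not que.empty():
    idx, value = que.get()
    results[idx] = value

print(results)  # → ['hello world!', 'hello again!']
```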
Arik
  • t = Thread(target=lambda q, arg1: q.put(foo(arg1)), args=(que, 'world!')) what is q.put doing here, and what does Queue.Queue() do? – vijay shanker Oct 29 '16 at 21:54
  • que = Queue.Queue() - creates a queue. q.put(foo) - inserts foo() into the queue – Arik Nov 05 '16 at 11:09
  • For Python3, need to change to `from queue import Queue`. – Gino Mempin Feb 06 '19 at 07:25
  • This seems to be the least disruptive method (no need to dramatically restructure the original code base) to allow the return value to come back to the main thread. – Fanchen Bao Dec 17 '19 at 23:07
  • But there is uncertainty in the order in which the threads execute. – Anandesh Sharma Mar 07 '21 at 20:53
  • @AnandeshSharma this is generally true for every multithreaded code, but it is unrelated to the question/answer. If you need specific info about a thread, you can always put it in the return value. – Arik Mar 08 '21 at 08:01
  • Python 3: That doesn't work, the first example. It keeps returning None. – Daniyal Warraich May 25 '21 at 22:49
  • @DaniyalWarraich I just ran both examples with Python 3 and they both work like a charm. Make sure you comment/uncomment the relevant lines. – Arik Jan 09 '22 at 13:02
  • @Arik yeah, I ran it again and it works perfectly. – Daniyal Warraich Jan 09 '22 at 20:11
  • can you explain what the use of arg1 is? my code is totally different and I didn't get this part. – Jawad May 02 '22 at 12:27
  • @Jawad The lambda is the target of the thread. The lambda has 2 arguments: q and arg1. Then we pass two arguments to the lambda: `args=(que, 'world!')`. Eventually, arg1 is the input to the foo() function, `'world!'` in this example. – Arik May 06 '22 at 15:24
  • and if it has more than 1 argument, how are the arguments passed in the queue? – Federico David Jul 18 '22 at 23:56
  • @FedericoDavid here is an example with two arguments: `t = Thread(target=lambda q, arg1, arg2: q.put(foo(arg1, arg2)), args=(que, 'world!', 'second_argument'))` – Arik Jul 21 '22 at 17:24
53

UPDATE:

I think there's a significantly simpler and more concise way to save the result of a thread, one that keeps the interface virtually identical to the threading.Thread class (please let me know if there are edge cases; I haven't tested this as much as my original post below):

import threading

class ConciseResult(threading.Thread):
    def run(self):
        self.result = self._target(*self._args, **self._kwargs)

To be robust and avoid potential errors:

import threading

class ConciseRobustResult(threading.Thread):
    def run(self):
        try:
            if self._target is not None:
                self.result = self._target(*self._args, **self._kwargs)
        finally:
            # Avoid a refcycle if the thread is running a function with
            # an argument that has a member that points to the thread.
            del self._target, self._args, self._kwargs

Short explanation: we override only the run method of threading.Thread, and modify nothing else. This allows us to use everything else the threading.Thread class does for us, without needing to worry about missing potential edge cases such as _private attribute assignments or custom attribute modifications in the way that my original post does.

We can verify that we only modify the run method by looking at the output of help(ConciseResult) and help(ConciseRobustResult). The only method/attribute/descriptor included under Methods defined here: is run, and everything else comes from the inherited threading.Thread base class (see the Methods inherited from threading.Thread: section).
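That check can also be done programmatically; a quick sketch (the assertions inspect the class dictionary directly):

```python
import threading

class ConciseResult(threading.Thread):
    def run(self):
        self.result = self._target(*self._args, **self._kwargs)

# only `run` is defined on the subclass itself...
assert 'run' in ConciseResult.__dict__
# ...everything else (start, join, is_alive, ...) is inherited from threading.Thread
assert 'join' not in ConciseResult.__dict__
assert 'start' not in ConciseResult.__dict__
```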

To test either of these implementations using the example code below, substitute ConciseResult or ConciseRobustResult for ThreadWithResult in the main function below.

Original post using a closure function in the init method:

Most answers I've found are long and require familiarity with other modules or advanced Python features, and will be rather confusing to anyone not already familiar with everything the answer covers.

Working code for a simplified approach:

import threading

class ThreadWithResult(threading.Thread):
    def __init__(self, group=None, target=None, name=None, args=(), kwargs={}, *, daemon=None):
        def function():
            self.result = target(*args, **kwargs)
        super().__init__(group=group, target=function, name=name, daemon=daemon)

Example code:

import time, random


def function_to_thread(n):
    count = 0
    while count < 3:
        print(f'still running thread {n}')
        count += 1
        time.sleep(3)
    result = random.random()
    print(f'Return value of thread {n} should be: {result}')
    return result


def main():
    thread1 = ThreadWithResult(target=function_to_thread, args=(1,))
    thread2 = ThreadWithResult(target=function_to_thread, args=(2,))
    thread1.start()
    thread2.start()
    thread1.join()
    thread2.join()
    print(thread1.result)
    print(thread2.result)

main()

Explanation: I wanted to simplify things significantly, so I created a ThreadWithResult class and had it inherit from threading.Thread. The nested function function in __init__ calls the target function we want to run on the thread, and saves its result as the instance attribute self.result after the thread finishes executing.

Creating an instance of this is identical to creating an instance of threading.Thread. Pass in the function you want to run on a new thread to the target argument and any arguments that your function might need to the args argument and any keyword arguments to the kwargs argument.

e.g.

my_thread = ThreadWithResult(target=my_function, args=(arg1, arg2, arg3))

I think this is significantly easier to understand than the vast majority of answers, and this approach requires no extra imports! I included the time and random module to simulate the behavior of a thread, but they're not required to achieve the functionality asked in the original question.

I know I'm answering this looong after the question was asked, but I hope this can help more people in the future!


EDIT: I created the save-thread-result PyPI package to allow you to access the same code above and reuse it across projects (GitHub code is here). The PyPI package fully extends the threading.Thread class, so you can set any attributes you would set on threading.Thread on the ThreadWithResult class as well!

The original answer above goes over the main idea behind this subclass, but for more information, see the more detailed explanation (from the module docstring) here.

Quick usage example:

pip3 install -U save-thread-result     # MacOS/Linux
pip  install -U save-thread-result     # Windows

python3     # MacOS/Linux
python      # Windows
from save_thread_result import ThreadWithResult

# As of Release 0.0.3, you can also specify values for
# `group`, `name`, and `daemon` if you want to set those
# values manually.
thread = ThreadWithResult(
    target = my_function,
    args   = (my_function_arg1, my_function_arg2, ...),
    kwargs = {my_function_kwarg1: kwarg1_value, my_function_kwarg2: kwarg2_value, ...}
)

thread.start()
thread.join()
if hasattr(thread, 'result'):  # hasattr, so a falsy return value still counts as a result
    print(thread.result)
else:
    # thread.result attribute not set - something caused
    # the thread to terminate BEFORE the thread finished
    # executing the function passed in through the
    # `target` argument
    print('ERROR! Something went wrong while executing this thread, and the function you passed in did NOT complete!!')

# seeing help about the class and information about the threading.Thread super class methods and attributes available:
help(ThreadWithResult)
slow-but-steady
  • Also just edited the answer to include a link to a PyPI module I made for this. The core code will probably stay the same, but I want to include some better usage examples and make the README a bit more detailed, so I'll incrementally add them and then update the package to 1.0.0 and `Stable` Development Status after that! I'll update the answer here after I do so as well :) – slow-but-steady Jan 27 '21 at 03:36
  • When returning a bool in a function, I got these errors: "TypeError: 'bool' object is not callable" and AttributeError: 'ThreadWithResult' object has no attribute 'result' – SiL3NC3 Mar 29 '23 at 07:59
  • The `AttributeError: 'ThreadWithResult' object has no attribute 'result'` appears when the thread is still running, or did not complete due to an exception after starting the thread somewhere (this is why `if getattr(thread, 'result', None)` is in the example). The `"TypeError: 'bool' object is not callable"` error is harder to debug without a code snippet, but my guess is one of the arguments to `ThreadWithResult` is not provided correctly. Are you using keyword arguments when defining the instance of `ThreadWithResult`? If you can post a gist/snippet of your code, that would also be helpful – slow-but-steady Apr 04 '23 at 03:39
34

Parris's / kindall's join/return answer, ported to Python 3:

from threading import Thread

def foo(bar):
    print('hello {0}'.format(bar))
    return "foo"

class ThreadWithReturnValue(Thread):
    def __init__(self, group=None, target=None, name=None, args=(), kwargs=None, *, daemon=None):
        Thread.__init__(self, group, target, name, args, kwargs, daemon=daemon)

        self._return = None

    def run(self):
        if self._target is not None:
            self._return = self._target(*self._args, **self._kwargs)

    def join(self, *args):
        Thread.join(self, *args)
        return self._return


twrv = ThreadWithReturnValue(target=foo, args=('world!',))

twrv.start()
print(twrv.join())   # prints foo

Note, the Thread class is implemented differently in Python 3.

GuySoft
  • join takes a timeout parameter that should be passed along – c z Jan 05 '17 at 11:57
  • documentation states that the only methods to override should be: __init__() and run() https://docs.python.org/3/library/threading.html#thread-objects – lmiguelmh Aug 06 '21 at 19:10
  • @lmiguelmh, This is a continuation of an earlier answer that explains that yes, this messes with internals. – ikegami Oct 26 '22 at 15:19
29

I stole kindall's answer and cleaned it up just a little bit.

The key part is adding *args and **kwargs to join() in order to handle the timeout

class threadWithReturn(Thread):
    def __init__(self, *args, **kwargs):
        super(threadWithReturn, self).__init__(*args, **kwargs)
        
        self._return = None
    
    def run(self):
        if self._Thread__target is not None:
            self._return = self._Thread__target(*self._Thread__args, **self._Thread__kwargs)
    
    def join(self, *args, **kwargs):
        super(threadWithReturn, self).join(*args, **kwargs)
        
        return self._return

UPDATED ANSWER BELOW

This is my most upvoted answer, so I decided to update it with code that will run on both py2 and py3.

Additionally, I see many answers to this question that show a lack of comprehension regarding Thread.join(). Some completely fail to handle the timeout arg. But there is also a corner case that you should be aware of when you have (1) a target function that can return None and (2) you also pass the timeout arg to join(). Please see "TEST 4" to understand this corner case.

ThreadWithReturn class that works with py2 and py3:

import sys
from threading import Thread
from builtins import super    # https://stackoverflow.com/a/30159479

_thread_target_key, _thread_args_key, _thread_kwargs_key = (
    ('_target', '_args', '_kwargs')
    if sys.version_info >= (3, 0) else
    ('_Thread__target', '_Thread__args', '_Thread__kwargs')
)

class ThreadWithReturn(Thread):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._return = None
    
    def run(self):
        target = getattr(self, _thread_target_key)
        if target is not None:
            self._return = target(
                *getattr(self, _thread_args_key),
                **getattr(self, _thread_kwargs_key)
            )
    
    def join(self, *args, **kwargs):
        super().join(*args, **kwargs)
        return self._return

Some sample tests are shown below:

import time, random

# TEST TARGET FUNCTION
def giveMe(arg, seconds=None):
    if seconds is not None:
        time.sleep(seconds)
    return arg

# TEST 1
my_thread = ThreadWithReturn(target=giveMe, args=('stringy',))
my_thread.start()
returned = my_thread.join()
# (returned == 'stringy')

# TEST 2
my_thread = ThreadWithReturn(target=giveMe, args=(None,))
my_thread.start()
returned = my_thread.join()
# (returned is None)

# TEST 3
my_thread = ThreadWithReturn(target=giveMe, args=('stringy',), kwargs={'seconds': 5})
my_thread.start()
returned = my_thread.join(timeout=2)
# (returned is None) # because join() timed out before giveMe() finished

# TEST 4
my_thread = ThreadWithReturn(target=giveMe, args=(None,), kwargs={'seconds': 5})
my_thread.start()
returned = my_thread.join(timeout=random.randint(1, 10))

Can you identify the corner-case that we may possibly encounter with TEST 4?

The problem is that we expect giveMe() to return None (see TEST 2), but we also expect join() to return None if it times out.

returned is None means either:

(1) that's what giveMe() returned, or

(2) join() timed out

This example is trivial since we know that giveMe() will always return None. But in a real-world case (where the target may legitimately return None or something else) we'd want to check explicitly what happened.

Below is how to address this corner-case:

# TEST 4
my_thread = ThreadWithReturn(target=giveMe, args=(None,), kwargs={'seconds': 5})
my_thread.start()
returned = my_thread.join(timeout=random.randint(1, 10))

if my_thread.is_alive():
    # returned is None because join() timed out
    # this also means that giveMe() is still running in the background
    pass
    # handle this based on your app's logic
else:
    # join() is finished, and so is giveMe()
    # BUT we could also be in a race condition, so we need to update returned, just in case
    returned = my_thread.join()
sam-6174
  • Do you know the _Thread_target equivalent for Python3? That attribute doesn't exist in Python3. – GreySage Jun 24 '16 at 19:20
  • I looked in the threading.py file, it turns out it is _target (other attributes are similarly named). – GreySage Jun 24 '16 at 20:16
  • You could avoid accessing the private variables of the thread class, if you save the `target`, `args`, and `kwargs` arguments to __init__ as member variables in your class. – Tolli Aug 18 '16 at 22:48
  • @GreySage See my answer, [I ported this block to python3](http://stackoverflow.com/a/40344234/311268) below – GuySoft Oct 31 '16 at 14:32
  • @GreySage answer now supports py2 and py3 – sam-6174 Jul 21 '18 at 03:50
  • @GuySoft note that your answer does not include the `timeout` arg for `join` – sam-6174 Jul 21 '18 at 03:50
27

Using Queue :

import threading, queue

def calc_square(num, out_queue1):
  l = []
  for x in num:
    l.append(x*x)
  out_queue1.put(l)


arr = [1,2,3,4,5,6,7,8,9,10]
out_queue1=queue.Queue()
t1=threading.Thread(target=calc_square, args=(arr,out_queue1))
t1.start()
t1.join()
print (out_queue1.get())
user341143
  • Really like this solution, short and sweet. If your function reads an input queue, and you add to the `out_queue1`, you will need to loop over `out_queue1.get()` and catch the `Queue.Empty` exception: `ret = []; try: while True: ret.append(out_queue1.get(block=False)); except Queue.Empty: pass` (semicolons simulate line breaks). – sastorsl Dec 10 '18 at 11:58
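The drain loop described in the comment above can be written out in full. A small self-contained sketch (the queue contents here are made up for illustration):

```python
import queue

out_queue1 = queue.Queue()
for item in ([1, 4, 9], [16, 25]):
    out_queue1.put(item)

# drain everything currently on the queue without blocking
ret = []
try:
    while True:
        ret.append(out_queue1.get(block=False))
except queue.Empty:
    pass

print(ret)  # [[1, 4, 9], [16, 25]]
```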
8

My solution to the problem is to wrap the function and thread in a class. It does not require using pools, queues, or C-type variable passing. It is also non-blocking: you check status instead. See the example of how to use it at the end of the code.

import threading

class ThreadWorker():
    '''
    The basic idea is given a function create an object.
    The object can then run the function in a thread.
    It provides a wrapper to start it,check its status,and get data out the function.
    '''
    def __init__(self,func):
        self.thread = None
        self.data = None
        self.func = self.save_data(func)

    def save_data(self,func):
        '''modify function to save its returned data'''
        def new_func(*args, **kwargs):
            self.data=func(*args, **kwargs)

        return new_func

    def start(self,params):
        self.data = None
        if self.thread is not None:
            if self.thread.is_alive():
                return 'running' #could raise exception here

        #unless thread exists and is alive start or restart it
        self.thread = threading.Thread(target=self.func,args=params)
        self.thread.start()
        return 'started'

    def status(self):
        if self.thread is None:
            return 'not_started'
        else:
            if self.thread.is_alive():
                return 'running'
            else:
                return 'finished'

    def get_results(self):
        if self.thread is None:
            return 'not_started' #could return exception
        else:
            if self.thread.is_alive():
                return 'running'
            else:
                return self.data

def add(x,y):
    return x +y

add_worker = ThreadWorker(add)
print add_worker.start((1,2,))
print add_worker.status()
print add_worker.get_results()
Peter Lonjers
  • how would you handle an exception? let's say the add function was given an int and a str. would all the threads fail or would only one fail? – user1745713 Mar 23 '16 at 17:58
  • +1 for thinking like I do. Seriously - this is the least effort. And if you're coding in Python - your stuff should automatically be done in a class, so this is legit the most sensible way to go about this issue. – Vaidøtas I. Jan 21 '21 at 00:13
7

Taking into consideration @iman's comment on @JakeBiesinger's answer, I have recomposed it to use a variable number of threads:

from multiprocessing.pool import ThreadPool

def foo(bar, baz):
    print 'hello {0}'.format(bar)
    return 'foo' + baz

numOfThreads = 3 
results = []

pool = ThreadPool(numOfThreads)

for i in range(0, numOfThreads):
    results.append(pool.apply_async(foo, ('world', 'foo'))) # tuple of args for foo)

# do some other stuff in the main process
# ...
# ...

results = [r.get() for r in results]
print results

pool.close()
pool.join()
Guy Avraham
6

I'm using this wrapper, which conveniently turns any function into one that runs in a Thread, taking care of its return value or exception. It doesn't add Queue overhead.

import sys
import threading

def threading_func(f):
    """Decorator for running a function in a thread and handling its return
    value or exception"""
    def start(*args, **kw):
        def run():
            try:
                th.ret = f(*args, **kw)
            except:
                th.exc = sys.exc_info()
        def get(timeout=None):
            th.join(timeout)
            if th.exc:
                raise th.exc[0], th.exc[1], th.exc[2] # py2
                ##raise th.exc[1] #py3                
            return th.ret
        th = threading.Thread(None, run)
        th.exc = None
        th.get = get
        th.start()
        return th
    return start

Usage Examples

def f(x):
    return 2.5 * x
th = threading_func(f)(4)
print("still running?:", th.is_alive())
print("result:", th.get(timeout=1.0))

@threading_func
def th_mul(a, b):
    return a * b
th = th_mul("text", 2.5)

try:
    print(th.get())
except TypeError:
    print("exception thrown ok.")

Notes on threading module

Comfortable return-value and exception handling of a threaded function is a frequent "Pythonic" need and should indeed already be offered by the threading module - possibly directly in the standard Thread class. ThreadPool has way too much overhead for simple tasks - 3 managing threads, lots of bureaucracy. Unfortunately Thread's layout was originally copied from Java - which you can see, e.g., in the still-useless first (!) constructor parameter, group.

kxr
  • the first constructor is not useless, its reserved there for future implementation.. from python parallel programming cookbook – vijay shanker Oct 29 '16 at 21:23
  • Nice solution! Just for curiosity, why in the 'get' you are not simply raising exception as it is (i.e. raise ex)? – cabbi Oct 04 '20 at 09:43
6

Based on what kindall mentioned, here's a more generic solution that works with Python 3.

import threading

class ThreadWithReturnValue(threading.Thread):
    def __init__(self, *init_args, **init_kwargs):
        threading.Thread.__init__(self, *init_args, **init_kwargs)
        self._return = None
    def run(self):
        self._return = self._target(*self._args, **self._kwargs)
    def join(self, timeout=None):
        threading.Thread.join(self, timeout)
        return self._return

Usage

th = ThreadWithReturnValue(target=requests.get, args=('http://www.google.com',))
th.start()
response = th.join()
response.status_code  # => 200
Pithikos
6

The shortest and simplest way I've found to do this is to take advantage of Python classes and their dynamic properties. You can retrieve the current thread from within the context of your spawned thread using threading.current_thread(), and assign the return value to a property.

import threading

def some_target_function():
    # Your code here.
    threading.current_thread().return_value = "Some return value."

your_thread = threading.Thread(target=some_target_function)
your_thread.start()
your_thread.join()

return_value = your_thread.return_value
print(return_value)
Alec Cureau
5

join() always returns None; I think you should subclass Thread to handle return codes and such.
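A minimal sketch of such a subclass (it relies on the private `_target`/`_args`/`_kwargs` attributes of CPython 3's Thread, as several other answers here do):

```python
from threading import Thread

class ReturningThread(Thread):
    """A Thread whose join() hands back the target's return value."""
    def run(self):
        if self._target is not None:
            self.result = self._target(*self._args, **self._kwargs)

    def join(self, timeout=None):
        Thread.join(self, timeout)
        # None if the target never finished (or itself returned None)
        return getattr(self, 'result', None)

t = ReturningThread(target=lambda x: x * 2, args=(21,))
t.start()
print(t.join())  # 42
```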

BrainStorm
4

You can define a mutable object above the scope of the threaded function, and add the result to it. (I also modified the code to be Python 3 compatible.)

returns = {}
def foo(bar):
    print('hello {0}'.format(bar))
    returns[bar] = 'foo'

from threading import Thread
t = Thread(target=foo, args=('world!',))
t.start()
t.join()
print(returns)

This prints {'world!': 'foo'}

If you use the function input as the key to your results dict, every unique input is guaranteed to give an entry in the results.
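For example, with several inputs running at once (a sketch; the inputs are made up here, and each thread writes only to its own key):

```python
from threading import Thread

returns = {}

def foo(bar):
    # each thread stores its result under its own input key
    returns[bar] = 'foo'

threads = [Thread(target=foo, args=(w,)) for w in ('world!', 'moon!', 'mars!')]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(returns))  # ['mars!', 'moon!', 'world!'] - one entry per unique input
```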

Thijs D
3

This is a pretty old question, but I wanted to share a simple solution that has worked for me and helped my dev process.

The methodology behind this answer is that the "new" target function, inner, assigns the result of the original function (passed through the __init__ function) to the wrapper's result instance attribute, via a closure.

This allows the wrapper class to hold onto the return value for callers to access at anytime.

NOTE: This method doesn't need to use any mangled or private methods of the threading.Thread class, although generator functions have not been considered (the OP did not mention them).

Enjoy!

from threading import Thread as _Thread


class ThreadWrapper:
    def __init__(self, target, *args, **kwargs):
        self.result = None
        self._target = self._build_threaded_fn(target)
        self.thread = _Thread(
            target=self._target,
            *args,
            **kwargs
        )

    def _build_threaded_fn(self, func):
        def inner(*args, **kwargs):
            self.result = func(*args, **kwargs)
        return inner

Additionally, you can run pytest (assuming you have it installed) with the following code to demonstrate the results:

import time
from commons import ThreadWrapper


def test():

    def target():
        time.sleep(1)
        return 'Hello'

    wrapper = ThreadWrapper(target=target)
    wrapper.thread.start()

    r = wrapper.result
    assert r is None

    time.sleep(2)

    r = wrapper.result
    assert r == 'Hello'
  • I've found at least 2 errors in ThreadWrapper: 1) "_build_target_fn" probably means "_build_threaded_fn" 2) "self.target" does not exist; you mean "self._target" – Pierre Puiseux Aug 17 '22 at 16:27
  • @PierrePuiseux Sorry bout that. Fixed. Thanks and good catch! – amirfounder Sep 11 '22 at 17:58
  • so much underrated. This is without the unneeded inheritance, and `__slots__`-able. My guess, it's the fastest and cleanest solution. There are some missing optimizations though, like there's no need to save `self._target`, it might as well be local. – LogicDaemon Apr 24 '23 at 17:02
2

Define your target to
1) take an argument q
2) replace any statements return foo with q.put(foo); return

so a function

def func(a):
    ans = a * a
    return ans

would become

def func(a, q):
    ans = a * a
    q.put(ans)
    return

and then you would proceed as such

from Queue import Queue
from threading import Thread

ans_q = Queue()
arg_tups = [(i, ans_q) for i in xrange(10)]

threads = [Thread(target=func, args=arg_tup) for arg_tup in arg_tups]
_ = [t.start() for t in threads]
_ = [t.join() for t in threads]
results = [ans_q.get() for _ in xrange(len(threads))]

And you can use function decorators/wrappers to make it so you can use your existing functions as target without modifying them, but follow this basic scheme.
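Such a wrapper might look like this (a sketch in Python 3 syntax; `queued` is a name invented here). It leaves the original function untouched and expects the queue as the last positional argument:

```python
from queue import Queue
from threading import Thread
import functools

def queued(f):
    """Wrap f so its return value is put on a queue passed as the last arg."""
    @functools.wraps(f)
    def wrapper(*args):
        *fn_args, q = args          # split off the trailing queue argument
        q.put(f(*fn_args))
    return wrapper

def func(a):                        # existing function, unmodified
    return a * a

ans_q = Queue()
threads = [Thread(target=queued(func), args=(i, ans_q)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

results = sorted(ans_q.get() for _ in threads)
print(results)  # [0, 1, 4, 9]
```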

tscizzle
2

GuySoft's idea is great, but I think the object does not necessarily have to inherit from Thread, and start() could be removed from the interface:

from threading import Thread
import queue
class ThreadWithReturnValue(object):
    def __init__(self, target=None, args=(), **kwargs):
        self._que = queue.Queue()
        self._t = Thread(target=lambda q,arg1,kwargs1: q.put(target(*arg1, **kwargs1)) ,
                args=(self._que, args, kwargs), )
        self._t.start()

    def join(self):
        self._t.join()
        return self._que.get()


def foo(bar):
    print('hello {0}'.format(bar))
    return "foo"

twrv = ThreadWithReturnValue(target=foo, args=('world!',))

print(twrv.join())   # prints foo
pandy.song
2

You can use pool.apply_async() of ThreadPool() to return the value from test() as shown below:

from multiprocessing.pool import ThreadPool

def test(num1, num2):
    return num1 + num2

pool = ThreadPool(processes=1) # Here
result = pool.apply_async(test, (2, 3)) # Here
print(result.get()) # 5

And, you can also use submit() of concurrent.futures.ThreadPoolExecutor() to return the value from test() as shown below:

from concurrent.futures import ThreadPoolExecutor

def test(num1, num2):
    return num1 + num2

with ThreadPoolExecutor(max_workers=1) as executor:
    future = executor.submit(test, 2, 3) # Here
print(future.result()) # 5

And, instead of return, you can use the array result as shown below:

from threading import Thread

def test(num1, num2, r):
    r[0] = num1 + num2 # Instead of "return"

result = [None] # Here

thread = Thread(target=test, args=(2, 3, result))
thread.start()
thread.join()
print(result[0]) # 5

And instead of return, you can also use the queue result as shown below:

from threading import Thread
import queue

def test(num1, num2, q):
    q.put(num1 + num2) # Instead of "return" 

queue = queue.Queue() # Here

thread = Thread(target=test, args=(2, 3, queue))
thread.start()
thread.join()
print(queue.get()) # 5
Super Kai - Kazuya Ito
1

As mentioned, the multiprocessing pool is much slower than basic threading. Using queues, as proposed in some answers here, is a very effective alternative. I have used it with dictionaries in order to run a lot of small threads and recover multiple answers by combining them with dictionaries:

#!/usr/bin/env python3

import threading
# use Queue for python2
import queue
import random

LETTERS = 'abcdefghijklmnopqrstuvwxyz'
LETTERS = [ x for x in LETTERS ]

NUMBERS = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

def randoms(k, q):
    result = dict()
    result['letter'] = random.choice(LETTERS)
    result['number'] = random.choice(NUMBERS)
    q.put({k: result})

threads = list()
q = queue.Queue()
results = dict()

for name in ('alpha', 'oscar', 'yankee',):
    threads.append( threading.Thread(target=randoms, args=(name, q)) )
    threads[-1].start()
_ = [ t.join() for t in threads ]
while not q.empty():
    results.update(q.get())

print(results)
Yves Dorfsman
1

Here is my version of @kindall's answer.

This version makes it so that all you have to do is input your command with arguments to create the new thread.

This was made with Python 3.8:

from threading import Thread
from typing import Any

def test(plug, plug2, plug3):
    print(f"hello {plug}")
    print(f'I am the second plug : {plug2}')
    print(plug3)
    return 'I am the return Value!'

def test2(msg):
    return f'I am from the second test: {msg}'

def test3():
    print('hello world')

def NewThread(com, Returning: bool, *arguments) -> Any:
    """
    Will create a new thread for a function/command.

    :param com: command (function) to be executed
    :param Returning: True/False - whether this command needs to return anything
    :param arguments: arguments to be sent to the command
    """
    class NewThreadWorker(Thread):
        def __init__(self, group = None, target = None, name = None, args = (), kwargs = None, *,
                     daemon = None):
            Thread.__init__(self, group, target, name, args, kwargs, daemon = daemon)
            
            self._return = None
        
        def run(self):
            if self._target is not None:
                self._return = self._target(*self._args, **self._kwargs)
        
        def join(self):
            Thread.join(self)
            return self._return
    
    ntw = NewThreadWorker(target = com, args = (*arguments,))
    ntw.start()
    if Returning:
        return ntw.join()

if __name__ == "__main__":
    print(NewThread(test, True, 'hi', 'test', test2('hi')))
    NewThread(test3, True)
Tomerikoo
ViperSniper0501
0

One usual solution is to wrap your function foo in a small wrapper that puts the return value on a queue, like

result = queue.Queue()

def task_wrapper(*args):
    result.put(foo(*args))

Then the whole code may look like this:

import threading
import queue

result = queue.Queue()
max_num = 4  # upper bound on concurrently running threads

def task_wrapper(*args):
    result.put(foo(*args))

threads = [threading.Thread(target=task_wrapper, args=args) for args in args_list]

for t in threads:
    t.start()
    # throttle: wait until the number of live threads drops below max_num
    while len(threading.enumerate()) >= max_num:
        pass
for t in threads:
    t.join()

# result now holds one return value per thread

Note

One important issue is that the return values may be unordered. (In fact, the return values need not be saved to a queue at all; you can choose any thread-safe data structure.)
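One way to recover ordering, for example, is to tag each value with the index of its argument and sort afterwards (a sketch; the squaring target and inputs are made up here):

```python
import queue
import threading

result = queue.Queue()

def task_wrapper(i, x):
    result.put((i, x * x))  # tag the value with its argument's index

args_list = [3, 1, 4, 1, 5]
threads = [threading.Thread(target=task_wrapper, args=(i, x))
           for i, x in enumerate(args_list)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# sort by index, then drop the tags
ordered = [val for _, val in sorted(result.get() for _ in threads)]
print(ordered)  # [9, 1, 16, 1, 25]
```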

T. Jiang
0

Kindall's answer in Python3

class ThreadWithReturnValue(Thread):
    def __init__(self, group=None, target=None, name=None,
                 args=(), kwargs={}, *, daemon=None):
        Thread.__init__(self, group, target, name, args, kwargs, daemon=daemon)
        self._return = None 

    def run(self):
        try:
            if self._target:
                self._return = self._target(*self._args, **self._kwargs)
        finally:
            del self._target, self._args, self._kwargs 

    def join(self,timeout=None):
        Thread.join(self,timeout)
        return self._return
Smart Manoj
0

I don't know whether this will work for you, but I chose to create a global object (mostly dictionaries or nested arrays) so that a function can access and mutate it. I know it takes more resources, but we are not dealing with quantum science, so I guess we can spare a bit more RAM, provided RAM consumption increases linearly with CPU usage. Here is an example:

import requests 
import json 
import string 
import random 
import threading
import time 
dictionary = {} 
def get_val1(L): 
    print('#1')
    for n,elem in enumerate(L):
        dictionary[elem]=json.loads(requests.post(f'https://api.example.com?text={elem}&Return=JSON').text)
def get_val2(L): 
    print('#2')
    for n,elem in enumerate(L):
        dictionary[elem]=json.loads(requests.post(f'https://api.example.com?text={elem}&Return=JSON').text)
def get_val3(L): 
    print('#3')
    for n,elem in enumerate(L):
        dictionary[elem]=json.loads(requests.post(f'https://api.example.com?text={elem}&Return=JSON').text)
def get_val4(L): 
    print('#4')
    for n,elem in enumerate(L):
        dictionary[elem]=json.loads(requests.post(f'https://api.example.com?text={elem}&Return=JSON').text)
# L is assumed to hold four lists of input strings
t1 = threading.Thread(target=get_val1, args=(L[0],))
t2 = threading.Thread(target=get_val2,args=(L[1],)) 
t3 = threading.Thread(target=get_val3,args=(L[2],))
t4 = threading.Thread(target=get_val4,args=(L[3],))

t1.start()
t2.start()
t3.start()
t4.start()

t1.join()
t2.join()
t3.join()
t4.join()

This program runs 4 threads, each of which fetches some data for the texts in its slice of L. The data returned from the API is stored in the dictionary. Whether this is beneficial may vary from program to program, but for small to medium computation tasks this object mutation works pretty fast and uses only a few percent more resources.

-2

I know this thread is old... but I faced the same problem... If you are willing to use thread.join():

import threading

class test:

    def __init__(self):
        self.msg=""

    def hello(self,bar):
        print('hello {}'.format(bar))
        self.msg="foo"


    def main(self):
        thread = threading.Thread(target=self.hello, args=('world!',))
        thread.start()
        thread.join()
        print(self.msg)

g=test()
g.main()
-3

Best way... Define a global variable, then change the variable in the threaded function. Nothing to pass in or retrieve back.

from threading import Thread

# global var
random_global_var = 5

def function():
    global random_global_var
    random_global_var += 1

domath = Thread(target=function)
domath.start()
domath.join()
print(random_global_var)

# result: 6