The problem is that `t1.start()` doesn't return values... Of course not. `t1` hasn't finished at this point. If `start` waited for the background thread to finish, there would be no reason to use threads in the first place.

You need to set things up so the background threads post their work somewhere and signal you that they're done, then wait until both threads have signaled you. A queue is one way to do that. So is a shared variable plus a `Condition`. Or, in this case, just a shared variable plus `join`ing the thread. But I'll show one way to do it with a queue, since that's what you asked for:
    import queue
    import threading

    def enthread(target, args):
        q = queue.Queue()
        def wrapper():
            # Run the real target and post its return value on the queue.
            q.put(target(*args))
        t = threading.Thread(target=wrapper)
        t.start()
        return q

    q1 = enthread(target=derivative, args=(lst[0], var))
    q2 = enthread(target=derivative, args=(lst[2], var))
    return [q1.get(), '+', q2.get()]
What I did there is to create a queue, close over it in a wrapper function that the background thread runs (wrapping the real target function), and have the background thread put its result on the queue. Then, the main thread can just wait on the queue.
Note that this isn't `join`ing each thread, which can be a problem. But hopefully you can see how to expand on the code to make it more robust.
Also note that we're explicitly waiting for thread 1 to finish before checking on thread 2. In a situation where you can't do anything until you have all the results anyway, that's fine. But in many applications, you'll want a single queue, so you can pick up the results as they come in (tagging the values in some way if you need to be able to reconstruct the original order).
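The single-queue-with-tags idea might look like this (a sketch under the same assumptions; `pow` again stands in for your real target function):

```python
import queue
import threading

def enthread_tagged(q, tag, target, args):
    def wrapper():
        # Post (tag, result) so the consumer can reconstruct the order later.
        q.put((tag, target(*args)))
    threading.Thread(target=wrapper).start()

q = queue.Queue()
enthread_tagged(q, 0, pow, (2, 10))
enthread_tagged(q, 1, pow, (3, 2))

results = [None, None]
for _ in range(2):
    tag, value = q.get()  # whichever thread finishes first comes off first
    results[tag] = value
```

Because each result carries its tag, `results` ends up in the original order no matter which thread finishes first.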
A much better solution is to use a higher-level abstraction, like a thread pool or a future (or an executor, which combines both abstractions into one). But it's worth understanding how these pieces work first, then learning how to do things the easy way. So, once you understand why this works, go read the docs on `concurrent.futures`.
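For comparison, here's roughly what the same thing looks like with an executor (again with `pow` as a stand-in target); the futures play the role our queues did above:

```python
import concurrent.futures

with concurrent.futures.ThreadPoolExecutor() as executor:
    f1 = executor.submit(pow, 2, 10)  # returns a Future immediately
    f2 = executor.submit(pow, 3, 2)
    # result() blocks until that task is done, like q.get() did
    results = [f1.result(), f2.result()]
```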
Finally, assuming you're using CPython or another GIL-based implementation (which you probably are), and that the `derive_solver` function isn't a C extension function explicitly designed to do most of its work without the GIL, this isn't going to be a good idea in the first place. Threads are great when you need concurrency without parallelism (because your code is simpler that way, or because it's I/O bound), but when you're actually trying to benefit from multiple cores, they aren't the answer, because only one thread can run the interpreter at a time. Use `multiprocessing` (or just `concurrent.futures.ProcessPoolExecutor` instead of `concurrent.futures.ThreadPoolExecutor`) if you need parallelism.