
I saw this page suggesting the use of the defer module to execute a series of tasks asynchronously.

I want to use it for my Project:

  1. Calculate the median of each list of numbers I have (I have a list containing lists of numbers).
  2. Get the minimum and maximum of all those medians.

But as a matter of fact, I did not quite understand how to use it.

I would love an explanation of defer in Python, and whether you think it is the appropriate way to achieve my goal (considering the Global Interpreter Lock).

Thanks in advance!

NI6
  • The answer talks about Twisted there; you want to watch my colleague [Łukasz Langa explain asynchronous programming at Pycon 2016](https://www.youtube.com/watch?v=l4Nn-y9ktd4) and use Python 3.5 or newer to get the benefit of using the more readable `async`/`await` syntax to achieve the same. – Martijn Pieters May 30 '17 at 06:34
  • That said, using coroutines gives you better latency control in I/O heavy applications. CPU-heavy applications (like calculating stats on numbers), will **not** benefit. Use multiprocessing instead. – Martijn Pieters May 30 '17 at 06:35

1 Answer


No, using asynchronous programming (cooperative routines, a.k.a. coroutines) will not help your use case. Async is great for I/O-intensive workloads, or anything else that has to wait for slower, external events to fire.

Coroutines work because they give up control (yield) to other coroutines whenever they have to wait for something (usually for some I/O to take place). If they do this frequently, the event loop can alternate between loads of coroutines, often far more than what threading could achieve, with a simpler programming model (no need to lock data structures all the time).
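
To make that concrete, here is a minimal asyncio sketch (not your median workload; `asyncio.run` requires Python 3.7+): each coroutine awaits a simulated I/O delay, which hands control back to the event loop so the others can run in the meantime.

```python
import asyncio

async def fetch(name, delay):
    # "await" hands control back to the event loop while this coroutine
    # waits, so other coroutines can run during the (simulated) I/O pause.
    await asyncio.sleep(delay)
    return f"{name} done after {delay}s"

async def main():
    # All three waits overlap instead of running back to back.
    results = await asyncio.gather(fetch("a", 1), fetch("b", 1), fetch("c", 1))
    print(results)

asyncio.run(main())  # finishes in roughly 1 second, not 3
```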

Your use case is not waiting for I/O, however; you have a computationally heavy workload. Such workloads do not have obvious places to yield, and because they don't need to wait for external events, there is no reason to do so anyway. For such a workload, use a multiprocessing model to do work in parallel on different CPU cores.
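
For example, a rough sketch of your median task with `multiprocessing.Pool` (the input data here is placeholder data for illustration):

```python
from multiprocessing import Pool
from statistics import median

# Placeholder data: a list containing lists of numbers.
lists_of_numbers = [
    [3, 1, 4, 1, 5, 9, 2, 6],
    [2, 7, 1, 8, 2, 8],
    [1, 6, 1, 8, 0, 3],
]

if __name__ == "__main__":
    # Each inner list's median is computed in a separate worker process,
    # so the CPU-bound work can spread across cores despite the GIL.
    with Pool() as pool:
        medians = pool.map(median, lists_of_numbers)

    print("medians:", medians)
    print("min:", min(medians), "max:", max(medians))
```

(For inputs this small the process start-up cost dwarfs the actual work; the pattern only pays off when each list is genuinely large.)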

Asynchronous programming does not defeat the GIL either, but it does give the event loop the opportunity to move the waiting-for-I/O parts into C code that can unlock the GIL and handle all that I/O processing in parallel while other Python code (in a different coroutine) executes.

See [this talk by my colleague Łukasz Langa at PyCon 2016](https://www.youtube.com/watch?v=l4Nn-y9ktd4) for a good introduction to async programming.

Martijn Pieters