
From reading this article, http://www.artima.com/articles/io_design_patternsP.html, I understand that the Proactor pattern is fully asynchronous while the Reactor pattern is not.

All the popular asynchronous event-driven networking frameworks that I'm aware of (Twisted, Gevent, Tornado, Asyncio, and Node.js) apply the reactor design pattern. Why is that? Doesn't the proactor pattern provide better performance?

Michael B
  • Note https://stackoverflow.com/questions/9138294/what-is-the-difference-between-event-driven-model-and-reactor-pattern/9143390#9143390 regarding Twisted's use of "reactor" vs "proactor". – Jean-Paul Calderone Jun 30 '16 at 09:50

1 Answer


Because, as the article you cited points out, the Proactor pattern requires kernel-level (internal) support for asynchronous I/O, and not all OSes provide that natively in their user-facing I/O layer. The frameworks you mention are all multi-platform toolkits/modules, so they need to support a wide variety of OS I/O architectures.

To avoid having to provide platform-specific "backend" implementations for each OS, these frameworks opt for the "lowest common denominator" design pattern. The Reactor pattern is more universal, and can hence be implemented portably without requiring different backends.
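To illustrate the portability point, here is a minimal Reactor-style sketch using Python's standard-library `selectors` module, which itself picks the best readiness mechanism available on the host OS (epoll, kqueue, poll, or select). The socket pair and `on_readable` callback are just illustrative stand-ins for a real connection and handler, not part of any framework's API:

```python
import selectors
import socket

# DefaultSelector chooses the best platform mechanism (epoll, kqueue,
# poll, or select) -- the "one Reactor, many OSes" portability described above.
sel = selectors.DefaultSelector()

# A connected socket pair stands in for a network connection.
a, b = socket.socketpair()
a.setblocking(False)
b.setblocking(False)

def on_readable(sock):
    # Reactor style: the demultiplexor tells us the socket is READY,
    # and the application then performs the (non-blocking) read itself.
    return sock.recv(1024)

sel.register(b, selectors.EVENT_READ, on_readable)

a.send(b"ping")  # make b readable

received = []
for key, events in sel.select(timeout=1):
    callback = key.data          # the handler registered above
    received.append(callback(key.fileobj))

sel.unregister(b)
a.close()
b.close()

print(received)  # [b'ping']
```

The key contrast with a Proactor is in `on_readable`: the application is notified of *readiness* and does the read itself, rather than handing the OS a buffer and being notified of *completion*.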

user2137858
  • The exact wording is "While the event demultiplexor waits, the OS executes the read operation in a parallel kernel thread, puts data into a user-defined buffer, and notifies the event demultiplexor that the read is complete." However, I believe it is also true that application software or support software is not precluded from doing this and controlling buffer access. – RichMeister Jul 12 '18 at 21:55