
Are there explicit considerations about the latency of any single request in the Node.js event loop? AFAICT every I/O call returns an EventEmitter which emits an event, and the processing of all the events is multiplexed through the use of a pipe. So it is possible that the event that needs to be processed for an important request may be placed too far back into the pipe. Is there some sort of priority queue that can be used to schedule the order of execution of event handlers?

Here's why I asked this question in the first place: I decided to give a gist.github link (see the comments below) because the reason is long but related to the technical question.

Pushpendre

2 Answers


About priorities of execution in the event loop:

  1. setImmediate() runs before setTimeout(fn, 0) (see the sketch below)
  2. process.nextTick() queues its callback to run before the event loop continues (despite the name, it does not wait for the next iteration)
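
For example, here is a minimal sketch (not part of the original answer) of that ordering. When both are scheduled from inside an I/O callback, setImmediate() reliably fires before setTimeout(fn, 0); at the top level of a script their relative order is not guaranteed, which is the nuance the comment below points at:

```js
const fs = require('fs');

// Scheduled at the top level: the relative order of these two is not
// guaranteed (it depends on when the event loop's timers phase starts).
setTimeout(() => console.log('top-level setTimeout(fn, 0)'), 0);
setImmediate(() => console.log('top-level setImmediate'));

// Scheduled from inside an I/O callback: setImmediate always runs before
// setTimeout(fn, 0), because the check phase follows the poll (I/O) phase
// directly, while timers only run on the next loop iteration.
fs.readFile(__filename, () => {
  setTimeout(() => console.log('setTimeout(fn, 0) from I/O callback'), 0);
  setImmediate(() => console.log('setImmediate from I/O callback'));
});
```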

The event loop in Node.js does not natively support priorities. You can always implement your own priority queue (or use an existing implementation) and schedule your functions through it.
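
As an illustration, such a scheduler could be layered on top of setImmediate(). The PriorityScheduler class and its API below are made up for this sketch, not a built-in Node.js facility:

```js
// Minimal priority scheduler sketch: lower number = higher priority.
class PriorityScheduler {
  constructor() {
    this.tasks = [];       // entries of the form { priority, fn }
    this.draining = false;
  }

  push(priority, fn) {
    this.tasks.push({ priority, fn });
    // Keep the array sorted by priority (fine for small queues; a real
    // implementation would use a binary heap).
    this.tasks.sort((a, b) => a.priority - b.priority);
    if (!this.draining) {
      this.draining = true;
      setImmediate(() => this.drain());
    }
  }

  drain() {
    // Run one task per turn of the event loop so I/O is not starved.
    const task = this.tasks.shift();
    if (task) task.fn();
    if (this.tasks.length > 0) {
      setImmediate(() => this.drain());
    } else {
      this.draining = false;
    }
  }
}

const scheduler = new PriorityScheduler();
scheduler.push(5, () => console.log('low priority work'));
scheduler.push(1, () => console.log('high priority work'));
// Output: "high priority work" first, then "low priority work"
```

Running only one task per setImmediate() turn is a deliberate choice: it keeps the event loop free to service I/O callbacks between tasks.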

Tal Avissar
  • Your point #1 is not always true. See the example in this article: [setImmediate() vs nextTick() vs setTimeout(fn,0) – in depth explanation](http://voidcanvas.com/setimmediate-vs-nexttick-vs-settimeout/). – jfriend00 Jun 25 '17 at 04:29

It's not clear exactly what you're asking here. Your JavaScript does not directly add things to the event queue (that is only done by native code). Instead, you call some async operation, and the native code behind that operation adds something to the event queue when the async operation completes.
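
A minimal sketch of that flow (the file read here is just an arbitrary async operation):

```js
const fs = require('fs');

// Starting the async operation returns immediately; nothing is added to
// the event queue yet.
fs.readFile(__filename, (err, data) => {
  // This callback is queued by the native layer once the read completes,
  // and the event loop picks it up in a later poll phase.
  if (err) throw err;
  console.log('read finished:', data.length, 'bytes');
});

console.log('readFile requested; still running synchronous code');
// Output: the synchronous log prints first, the callback prints later.
```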

The article "The Node.js Event Loop, Timers, and process.nextTick()" gives you a lot of detail about how the event queue is serviced and how it handles different types of events (timers, I/O, etc.).

In general, things are FIFO (first in, first out) within an event type, with some exceptions.

process.nextTick() will run its callback BEFORE waiting I/O events.

setImmediate() will run its callback AFTER waiting I/O events.
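
A small sketch of both behaviors, FIFO within a type and the nextTick queue draining before the event loop moves on (this example is mine, not from the linked article):

```js
// Queued first, but runs last: the check phase (setImmediate) only runs
// after the nextTick queue has been drained completely.
setImmediate(() => console.log('setImmediate'));

// FIFO within a type: these two run in the order they were queued.
process.nextTick(() => console.log('nextTick 1'));
process.nextTick(() => console.log('nextTick 2'));

// Output: nextTick 1, nextTick 2, setImmediate
```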

More detail here: setImmediate vs. nextTick; nextTick vs setImmediate, visual explanation; and setTimeout vs. setImmediate vs. nextTick.

> So it is possible that the event that needs to be processed for an important request may be placed too far back into the pipe.

You'd have to show us the specific situation you're concerned about. If you yourself are scheduling a callback with setTimeout(), setImmediate() or process.nextTick(), then you have some control over when it happens by which of the three you pick. If you aren't scheduling it yourself (e.g. it's the completion callback of some async operation), then you don't control its scheduling in the event loop. It will go into the sub-queue that matches its type and be served FIFO from that phase of the event loop (as described in the above articles).

> Is there some sort of priority queue that can be used to schedule the order of execution of event handlers?

There is no exposed priority system. Within an event type, things are FIFO. Again, if you give us an actual coding example so we can see exactly what you're trying to do, we could offer some help on what your choices are. You may be able to use the setTimeout(), setImmediate() and process.nextTick() tools that are already available, or you may want to implement your own task queuing and prioritization system, driven by one of those three methods, that lets you prioritize the work you have queued yourself.
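
As one hedged example of the first option, a handler can break bulky, low-priority work into slices and yield with setImmediate() between slices, so already-queued events (including more important requests) get served in between. The /health route, handleBulkReport() and the 1000-row loop below are hypothetical names and numbers, used only for illustration:

```js
const http = require('http');

// Hypothetical sketch: defer bulky, low-priority work with setImmediate()
// so that already-queued events get a turn on the event loop first.
function handleBulkReport(res) {
  const chunks = [];
  let i = 0;
  function step() {
    // Do a bounded slice of synchronous work per event-loop turn...
    chunks.push(`row ${i++}\n`);
    if (i < 1000) {
      // ...then yield so other queued callbacks can run in between.
      setImmediate(step);
    } else {
      res.end(chunks.join(''));
    }
  }
  setImmediate(step);
}

http.createServer((req, res) => {
  if (req.url === '/health') {
    // Cheap, "important" request: answer right away.
    res.end('ok');
  } else {
    // Expensive, low-priority request: processed in slices.
    handleBulkReport(res);
  }
}).listen(3000);
```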

jfriend00
  • Hi @jfriend00 thank you for the detailed response. I am actually just learning about web api development and trying to understand differences between `gevent` `twisted` `node.js` `asyncio` or `j2ee` `tomcat` etc. I just watched Ryan Dahl's 2009 video and based on his explanation of the event loop this was the question that came to my mind. Honestly, if there was a book/blogpost which sort of went over all the `java`, `javascript` and `python` web/rpc frameworks and categorized them by features/applications that would be helpful, but I doubt that such a thing exists. – Pushpendre Jun 25 '17 at 04:49
  • @Pushpendre - It would be rare that one would select a language/framework because of the things you are asking about so it's not likely a point of comparison in a major article. Instead, you'd have to research how each language works itself and put together your own comparison. We could only really help further if you explained your actual use cases so we can understand why you care and what you're trying to accomplish. I've never found node.js to be limiting in this regard for any of the code I've written so would need to understand what you are trying to do. – jfriend00 Jun 25 '17 at 04:57
  • Hi, I wrote down exactly what I am trying to do in the following gist and updated the question. I hope this can provide more understanding about my reason for asking this question. https://gist.github.com/se4u/a492ca3ef327a7816362a8be0d02d403 – Pushpendre Jun 25 '17 at 05:34
  • @Pushpendre - I would think any of your choices could do what you're proposing (at the minimal level of detail you've provided). You are talking about a peak load of ~3 requests/sec for your 10k/hr. Depending upon how long it takes you to respond to a typical request, you may or may not need to involve more than one process (like clustering for node.js), but you can measure scalability and adapt when the time comes. What you describe there doesn't describe any requirements for event loop scheduling. – jfriend00 Jun 25 '17 at 05:51
  • Thanks again for reading what I posted!! and for helping me clear my head. Also I wanted to note that I kept my description minimal not really for hiding what I want to do but just because I was afraid nobody would read it if it was too long. – Pushpendre Jun 25 '17 at 06:29
  • @Pushpendre - FYI, there are no guarantees about latency in node.js and there really aren't with any web system running at near capacity. You can cluster node.js to take maximum advantage of the CPUs you have in your server and, if you design with all async I/O, that should max out your server horsepower which is pretty much the best any server can do. In the end, if you ask any server to do more than it has the processing power to handle, you will get latency as the work piles up. – jfriend00 Jun 25 '17 at 07:31
  • @Pushpendre - With a design using pretty much any technology, you scale your system to offer appropriate performance at peak load. With dynamic scaling services like AWS offers, you can even design auto-scaling as needed. No technology offers low latency at infinite load. – jfriend00 Jun 25 '17 at 07:33