
Let's say I have 1000 concurrent socket.io connections. On disconnect, I would like to give the user at least 30 seconds to reconnect and keep their session, so I thought of adding a timer to do that. Would that be CPU intensive, considering the timers would run for 30 seconds and only execute simple database queries? This is backend, by the way, so no browser.

Thanks

tosi
  • Totally depends on how you implement the timer. – kiddorails Jun 19 '18 at 04:40
  • No, I don't think so. I would recommend you use a cron-style job scheduler like node-schedule to close connections that have been inactive for more than 30 secs. – Marcia Ong Jun 19 '18 at 04:42
  • @MarciaOng So you're referring to using one timer that checks, for each user, whether it has disconnected? – tosi Jun 19 '18 at 04:45
  • @gitterio Yes. Rather than creating a timer for each connection, have a scheduler that executes a closeInactiveConnection function every minute (you can customize the interval). By doing so, it uses less RAM and puts a predictable execution load on the server's CPU. Assuming the scheduler runs every minute, the server will only execute closeInactiveConnection 60 times an hour. – Marcia Ong Jun 19 '18 at 04:48
  • @gitterio The complexity of the algorithm would be O(n), where n is the number of scheduler runs, which has zero growth regardless of the number of connections you have. That's better than having n = number of socket.io connections (see the sketch after these comments). – Marcia Ong Jun 19 '18 at 04:56
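For illustration, here is a minimal sketch of the sweep approach described in the comments above, assuming sessions are tracked in an in-memory Map and that a `disconnectedAt` timestamp is recorded when a socket drops. The `sessions` map and `closeInactiveConnection` function are illustrative names, not part of socket.io:

```js
// Sketch of the periodic-sweep approach, assuming an in-memory Map of
// sessions keyed by session ID. `disconnectedAt` is set when the socket
// drops and cleared on reconnect. All names here are illustrative.
const sessions = new Map();

function closeInactiveConnection() {
  const now = Date.now();
  for (const [id, session] of sessions) {
    // Only reap sessions whose socket dropped more than 30s ago.
    if (session.disconnectedAt && now - session.disconnectedAt > 30_000) {
      sessions.delete(id);
      // ...run the session-cleanup database query here...
    }
  }
}

// One timer for all connections; runs every minute regardless of load.
setInterval(closeInactiveConnection, 60_000);
```

Note the trade-off: this caps timer count at one, but a session may linger up to a full sweep interval beyond the 30-second grace period.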

1 Answer


Would that be CPU intensive, considering the timers would run for 30 seconds and only execute simple database queries?

You don't want a timer that runs every 30 seconds and executes database queries even when there are no disconnected sockets. That's essentially "polling", which is rarely the most efficient way of doing things.

What makes more sense is to set a 30-second timer each time you get a socket disconnect, and put the timerID in the session object. If the user reconnects, look up the session, find the timerID in there, and cancel that timer. If the user does not reconnect within 30 seconds, the timer fires and you can clear the session.
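Here is a minimal sketch of that flow, assuming socket.io v3+ (for `socket.handshake.auth`) and an in-memory Map as the session store. The `sessions` map and the `userId` lookup are illustrative assumptions, not something socket.io prescribes:

```js
const { Server } = require('socket.io');
const io = new Server(3000);

const sessions = new Map(); // illustrative in-memory session store

io.on('connection', (socket) => {
  // Assumes the client identifies itself in the auth payload (socket.io v3+).
  const userId = socket.handshake.auth.userId;
  let session = sessions.get(userId);

  if (session && session.timerID) {
    // Reconnected within the grace period: cancel the pending cleanup.
    clearTimeout(session.timerID);
    session.timerID = null;
  } else if (!session) {
    session = { timerID: null /* ...other session data... */ };
    sessions.set(userId, session);
  }

  socket.on('disconnect', () => {
    // Give the user 30 seconds to reconnect before clearing the session.
    session.timerID = setTimeout(() => {
      sessions.delete(userId);
      // ...run the session-cleanup database query here...
    }, 30_000);
  });
});
```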

Timers themselves don't consume CPU while pending, so it's no big deal to have a bunch of them. The node.js timer system is very efficient: it keeps track of when the next timer should fire, and only that one is actually armed as a system timer. When that timer fires, node.js arms a system timer for the next timer due to fire. Timers are kept in a sorted data structure to make this lookup cheap.

So, the only CPU to be consumed here is a very tiny amount of housekeeping to organize a new timer and then whatever code you're going to run when the timer fires. If you were going to run that code anyway and it's just a matter of whether you run it now or 30 seconds from now, then it doesn't make any difference to your CPU usage when you run it. It will consume the same amount of total CPU either way.
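If you want to verify this yourself, here is a quick, illustrative measurement sketch (not from the answer): schedule a large batch of 30-second timers and time the setup. On a typical machine the setup should complete in a few milliseconds, and the pending timers consume no CPU while they wait:

```js
// Quick check: schedule 10,000 no-op 30-second timers and measure setup cost.
const start = process.hrtime.bigint();
const timers = [];
for (let i = 0; i < 10_000; i++) {
  timers.push(setTimeout(() => {}, 30_000));
}
const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
console.log(`scheduled 10,000 timers in ${elapsedMs.toFixed(1)} ms`);

// While the timers are pending, the event loop just sleeps; cancel them
// here so the script can exit immediately.
timers.forEach(clearTimeout);
```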

So, if you want to set a 30-second timer when each socket disconnects, that's a perfectly fine thing to do, and it should not have a noticeable impact on your CPU usage at all.

Here's a reference article that helps explain: How does Node.js manage timers internally, and also this other answer: How many concurrent setTimeouts before performance issues?

jfriend00