Would that be cpu intensive considering the timers would run for 30 seconds and only execute simple database queries?
You don't want a timer running every 30 seconds and querying the database when there are no disconnected sockets. That's essentially "polling", which is rarely the most efficient way of doing things.
What makes more sense is to set a 30 second timer each time a socket disconnects and put the timerID in the session object. If the user reconnects, look up the session, find the timerID in there, and cancel that timer. If the user does not reconnect within 30 seconds, the timer fires and you can clear the session.
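Here's a minimal sketch of that pattern, assuming a socket.io-style server and an in-memory session map keyed by a user id; the `sessions` map, the `userId` lookup, and `cleanUpSession()` are hypothetical placeholders for however your app actually tracks sessions and does its database cleanup:

```typescript
import { Server, Socket } from "socket.io";

interface Session {
  userId: string;
  disconnectTimer?: NodeJS.Timeout; // the timerID stored right on the session
}

const sessions = new Map<string, Session>();  // keyed by userId (assumed)
const DISCONNECT_GRACE_MS = 30_000;           // 30 second grace period

const io = new Server(3000);

io.on("connection", (socket: Socket) => {
  const userId = socket.handshake.auth.userId as string; // however you identify users

  // Reconnect case: a pending cleanup timer exists for this user, so cancel it.
  const existing = sessions.get(userId);
  if (existing?.disconnectTimer) {
    clearTimeout(existing.disconnectTimer);
    existing.disconnectTimer = undefined;
  } else if (!existing) {
    sessions.set(userId, { userId });
  }

  socket.on("disconnect", () => {
    const session = sessions.get(userId);
    if (!session) return;

    // Start the 30s grace timer and remember its id on the session.
    session.disconnectTimer = setTimeout(() => {
      sessions.delete(userId);   // user never came back: clear the session
      cleanUpSession(userId);    // e.g. run your database cleanup query here
    }, DISCONNECT_GRACE_MS);
  });
});

// Hypothetical database cleanup; replace with your actual query.
async function cleanUpSession(userId: string): Promise<void> {
  console.log(`cleaning up session for ${userId}`);
}
```

The key point is that a database query only runs when a disconnected user actually fails to come back, instead of on a fixed polling schedule.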
A pending timer doesn't consume CPU by itself, so it's no big deal to have a bunch of them. The node.js timer system is very efficient: it keeps track of when the next timer should fire, and only that one is actually backed by a system timer. When it fires, node.js sets a system timer for the next one due. Timers are kept in a sorted data structure to make this easy.
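To make the idea concrete, here's a purely conceptual illustration of that scheme (not Node's actual implementation): pending timers kept sorted by expiry, with only the earliest one armed as a real system timer (represented below by a single `setTimeout` standing in for the OS-level timer):

```typescript
interface PendingTimer {
  fireAt: number;        // epoch ms when this timer should fire
  callback: () => void;
}

const pending: PendingTimer[] = [];  // kept sorted by fireAt, earliest first
let systemTimer: NodeJS.Timeout | undefined;

function schedule(delayMs: number, callback: () => void): void {
  const timer = { fireAt: Date.now() + delayMs, callback };
  // Insert in sorted position (a real implementation uses a more
  // efficient structure, but the principle is the same).
  const i = pending.findIndex((t) => t.fireAt > timer.fireAt);
  pending.splice(i === -1 ? pending.length : i, 0, timer);
  armSystemTimer();
}

function armSystemTimer(): void {
  if (systemTimer) clearTimeout(systemTimer);
  const next = pending[0];
  if (!next) return;
  // Only the earliest pending timer is backed by a real system timer.
  systemTimer = setTimeout(() => {
    pending.shift()!.callback();
    armSystemTimer();  // re-arm for whichever timer is now earliest
  }, Math.max(0, next.fireAt - Date.now()));
}
```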
So, the only CPU consumed here is a tiny amount of housekeeping to register a new timer, plus whatever code you run when the timer fires. If you were going to run that code anyway, and it's just a matter of whether it runs now or 30 seconds from now, it makes no difference to your CPU usage when it runs. It will consume the same amount of total CPU either way.
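If you want to convince yourself of the setup cost, here's a rough, informal check (not a rigorous benchmark) that schedules a large batch of 30-second timers and measures only the scheduling time; exact numbers will vary by machine, but it should come back in the low milliseconds:

```typescript
const COUNT = 100_000;

const start = process.hrtime.bigint();
const timers: NodeJS.Timeout[] = [];

for (let i = 0; i < COUNT; i++) {
  // Each timer would fire in ~30s; the work inside is trivial.
  timers.push(setTimeout(() => { /* cleanup work would go here */ }, 30_000));
}

const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
console.log(`scheduled ${COUNT} timers in ${elapsedMs.toFixed(1)} ms`);

// Cancel them so the script can exit promptly.
for (const t of timers) clearTimeout(t);
```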
So, if you want to set a 30 second timer when each socket disconnects, that's a perfectly fine thing to do and it should not have a noticeable impact on your CPU usage at all.
Here are two references that explain this further: How does Node.js manage timers internally and How many concurrent setTimeouts before performance issues?