
I would like to drive a setTimeout loop based on the time of the server, seconds only.

So when the seconds value on the server reaches 30, some function is run on all clients. It is vital that they do not drift totally out of sync with the server, as there will be a CRON job running on the server every, say, 45 seconds, which is important to the system functioning correctly.

It is OK if a client is out by a few seconds from the server; clients do not need to be synced to each other.

I am using the jQuery library.

imperium2335
  • Nothing yet, but I know that I don't want to have the client asking the server for its seconds value every second, as that will put too much overhead on the client. Or does the client only need to ask for the time every minute and run the scripts off what it thinks the time must be until the next re-sync? – imperium2335 Apr 08 '12 at 11:21
  • Never going to be exact since the data has to travel between server and client. – epascarello Apr 08 '12 at 12:50

4 Answers


Using Node.js and Socket.io:

// Server: send each connected client a "tick" event every 30 seconds
var io = require('socket.io').listen(80);

io.sockets.on('connection', tick);

function tick(socket){
    socket.emit("tick");
    setTimeout(function(){ tick(socket); }, 30000);
}

Client:

// Client: run the synced code whenever the server says so
var socket = io.connect('http://serveraddr');
socket.on('tick', function (data) {
    doSomething();
});

This uses Socket.io so that the server directly controls when the client runs the code.
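To try it out, one option is to save the first snippet as, say, server.js (a placeholder name), install the dependency with npm install socket.io as mentioned in the comments below, start it with node server.js, and load the second snippet in a page that also includes the Socket.io client script.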

pbfy0
  • I didn't think JS supported sockets. This is for a kind of chat application, so I think sockets would be ideal, as they can mimic how Java sockets work in Java-based chatrooms. Will this give the same kind of behaviour? Does your first script with socket.emit run on the server, and the other on the client? I am developing a chat project, and I am trying to address the issue of users pulling the plug on their connection etc. so as not to end up with a system full of ghost users. – imperium2335 Apr 08 '12 at 12:14
  • @imperium2335, newer browsers do, do some research. – epascarello Apr 08 '12 at 12:51
  • I've looked into it and it seems perfect for what I want, but can't figure out how to install this (I'm using windows)? – imperium2335 Apr 08 '12 at 15:00
  • Download node.js [here](http://nodejs.org/#download). After that, use `npm install socket.io` in a terminal (cmd) – pbfy0 Apr 08 '12 at 15:27
  • And yes, the first does run on the server, the second on the client. – pbfy0 Apr 08 '12 at 23:42
  • I've got it working and this is absolutely perfect for my application, so thanks! – imperium2335 Apr 09 '12 at 07:45

You can use long polling in this case. The client sends a request to the server, and the server sends its response only after some time has passed (here, when the next tick is due). In the response callback you can execute the code and then open the next request.
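A minimal jQuery sketch of that loop, assuming a hypothetical /wait-for-tick endpoint that the server holds open until the next tick is due (the endpoint name and doSomething are placeholders, not from the answer):

// Long-polling sketch: the server is assumed to hold /wait-for-tick open
// until the next tick is due, then respond.
function poll() {
    $.ajax({
        url: '/wait-for-tick',
        timeout: 60000, // give up after a minute and retry
        success: function () {
            doSomething(); // run the code that must stay in step with the server
        },
        complete: function () {
            poll(); // immediately open the next long poll, success or failure
        }
    });
}
poll();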

Vitalii Petrychuk

When the client first loads, it asks for the server time. Or you can just render the current server time into the JS.

Then the client gets the current local time, calculates the delta between the two, and uses that delta to work out the current server time at any given moment. This approach assumes, of course, that the two clocks don't drift too far apart (they really shouldn't).
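A rough jQuery sketch of that idea, assuming a hypothetical /time endpoint that returns the server's time in milliseconds since the epoch as { now: ... } (the endpoint and field names are illustrative, not from the answer):

// Fetch the server time once and remember the offset from the local clock
var serverDelta = 0; // serverTime - clientTime, in milliseconds

$.getJSON('/time', function (data) {
    serverDelta = data.now - Date.now();
    scheduleNextTick();
});

function serverNow() {
    return new Date(Date.now() + serverDelta);
}

// Fire doSomething() whenever the server's seconds value reaches 30
function scheduleNextTick() {
    var secs = serverNow().getSeconds();
    var wait = ((30 - secs) + 60) % 60 || 60; // seconds until the next :30
    setTimeout(function () {
        doSomething();
        scheduleNextTick();
    }, wait * 1000);
}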

Sergio Tulentsev

Depending on your solution (AJAX, page reload, etc.), you can just pass to the clients the number of seconds after which the next call should be made. This could be included in each response to the clients' calls.

E.g. the client calls the server, the server answers 56 s, and the client performs the next call in 56 seconds.
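A short jQuery sketch of that loop, assuming a hypothetical /next endpoint that answers { secondsUntilNext: 56 } (the endpoint and field names are made up for illustration):

// Each response tells the client how long to wait before calling again
function callServer() {
    $.getJSON('/next', function (data) {
        doSomething(); // whatever must stay in step with the server-side CRON job
        setTimeout(callServer, data.secondsUntilNext * 1000);
    });
}
callServer();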

diewie
  • Would a type of offset be needed, i.e. the latency between sending and receiving the request? The server sends 56, but the client really gets it, say, 3 seconds later, so any timeouts/intervals need to be adjusted on the client to compensate for the maximum latency. – imperium2335 Apr 08 '12 at 12:16
  • For non-time-critical applications, this approach ensures that the client does not perform the call earlier than expected. The latency won't accumulate, since the client gets a new _corrected_ timeout with every call. A real "synchronization" with measured latency is a bit complex, since you cannot tell whether the bottleneck is the connection or the processing on the client or server. For a rough estimate, you can measure, e.g., the time at the start of an AJAX request and the time when it finishes. – diewie Apr 08 '12 at 12:35
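A rough sketch of the estimate described in the last comment, reusing callServer and the hypothetical /next endpoint from the sketch above: note the time before the request, halve the round trip, and shorten the next timeout by that amount.

var sentAt = Date.now();
$.getJSON('/next', function (data) {
    var latency = (Date.now() - sentAt) / 2; // very rough one-way estimate, ms
    // shorten the wait so the next call lands closer to the server's schedule
    setTimeout(callServer, data.secondsUntilNext * 1000 - latency);
});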