
I'm trying to deploy an app to Heroku using Node.js and Socket.io with Redis. I have set up Socket.io to use XHR long polling as specified by Heroku, and it works perfectly if I only have one dyno, but it doesn't work when I scale it to use multiple dynos.

Initially I was using a MemoryStore in Socket.io, and when I scaled it up using "heroku ps:scale web=2", it started working only intermittently, giving this error in the client:

Uncaught TypeError: Property 'open' of object #<Transport> is not a function

I found in the Socket.io documentation that "if you want to scale to multiple process and / or multiple servers you can use our RedisStore which uses the Redis NoSQL database as man in the middle".

So, I created a RedisStore:

var newRedisStore = new RedisStore({
  redisPub : pub,
  redisSub : sub,
  redisClient : client
});

and configured Socket.io to use it:

//set up Web Socket Server
io.configure(function () { 
  io.set("transports", ["xhr-polling"]);
  io.set("polling duration", 10);
  io.set('store', newRedisStore);
});
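For completeness, the pub, sub, and client passed to the RedisStore are plain node_redis clients. They are created roughly like this (a sketch; I'm assuming the Redis To Go add-on, which exposes its connection string in REDISTOGO_URL):

var redis = require('redis');
var url = require('url');

// Fall back to a local Redis when the add-on URL isn't set (e.g. in development)
var redisUrl = url.parse(process.env.REDISTOGO_URL || 'redis://localhost:6379');

function createRedisClient() {
  var c = redis.createClient(redisUrl.port, redisUrl.hostname);
  if (redisUrl.auth) {
    // REDISTOGO_URL looks like redis://user:password@host:port/
    c.auth(redisUrl.auth.split(':')[1]);
  }
  return c;
}

var pub = createRedisClient();
var sub = createRedisClient();
var client = createRedisClient();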

And it all works perfectly locally and with one web dyno on Heroku. But as soon as I scale it to more than one dyno, it intermittently stops working again, although now I don't get the error anymore. So I'm not sure where to go from here.

These are the logs I'm getting from Heroku with 2 processes:

2012-06-16T15:36:12+00:00 app[web.2]: debug: setting poll timeout
2012-06-16T15:36:12+00:00 app[web.2]: debug: clearing poll timeout
2012-06-16T15:36:12+00:00 app[web.2]: debug: xhr-polling writing 7:::1+0
2012-06-16T15:36:12+00:00 app[web.2]: warn: client not handshaken client should reconnect
2012-06-16T15:36:12+00:00 app[web.2]: debug: set close timeout for client 15718037491002932534
2012-06-16T15:36:12+00:00 app[web.2]: debug: cleared close timeout for client 15718037491002932534
2012-06-16T15:36:12+00:00 app[web.2]: info: transport end (error)
2012-06-16T15:36:12+00:00 app[web.2]: debug: discarding transport


2 Answers


Did you try using the Cluster module from Node? http://nodejs.org/api/cluster.html

Like:

var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // Fork workers.
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('death', function(worker) {
    console.log('worker ' + worker.pid + ' died');
  });
} else {
  // Worker processes have a http server.
  http.Server(function(req, res) {
    res.writeHead(200);
    res.end("hello world\n");
  }).listen(8000);
}

Or:

var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // Fork workers.
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
} else {
  var sio = require('socket.io')
  , RedisStore = sio.RedisStore
  , io = sio.listen(8080, options);

  // Each worker creates its own RedisStore; because they all talk to the
  // same Redis server, state is shared between the workers
  io.set('store', new RedisStore);

  // Do the work here
  io.sockets.on('connection', function (socket) {
    socket.on('chat', function (data) {
      socket.broadcast.emit('chat', data);
    })
  });
}

As you can see here.
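Putting the two together, here is a rough sketch of what each worker would run, using your RedisStore setup (untested on Heroku; I'm assuming Socket.IO 0.9's RedisStore and the node_redis client; on Heroku you'd listen on process.env.PORT and point createClient at your Redis add-on instead of localhost):

var cluster = require('cluster');
var redis = require('redis');
var sio = require('socket.io');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // Fork one worker per CPU
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
} else {
  // Every worker opens its own Redis connections
  var pub = redis.createClient();
  var sub = redis.createClient();
  var client = redis.createClient();

  var io = sio.listen(8080);

  io.configure(function () {
    io.set('transports', ['xhr-polling']);
    io.set('polling duration', 10);
    io.set('store', new sio.RedisStore({
      redisPub: pub,
      redisSub: sub,
      redisClient: client
    }));
  });

  io.sockets.on('connection', function (socket) {
    socket.on('chat', function (data) {
      socket.broadcast.emit('chat', data);
    });
  });
}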

  • Thanks. I feel like that's moving in the right direction, but I can't seem to get it working. Socket.io keeps erroring out on emit when I use cluster. – tomgersic Jun 16 '12 at 19:34
  • Try increasing the [timeout](https://devcenter.heroku.com/articles/request-timeout#longpolling_and_streaming_responses) and let me know. Also, can you post your code so I can analyze it? Use pastebin.com or something similar. – Eugene Hauptmann Jun 16 '12 at 19:44

Scaling on Heroku is done with heroku ps:scale web={number}; you can't spawn your own processes there. I have the same issue right now.

https://github.com/LearnBoost/socket.io/issues/939
