
A simple proxy server, handling medium traffic, is oddly leaking memory.

The code is simply:

var net     = require('net');
var dgram   = require('dgram');

var server3 = net.createServer();
var udp_sv3 = dgram.createSocket('udp4');

var load_balancer = null;

udp_sv3.bind(9038);
udp_sv3.on('message', udpHandler);
server3.listen(2103);
server3.on('connection', connHandler);

// The first sender whose datagram is exactly 4 bytes is remembered as the
// load balancer; its traffic is forwarded to the target host, and everything
// else is forwarded back to the load balancer.
function udpHandler(msg, sender) {
  if (!load_balancer && sender.size === 4) load_balancer = sender;
  if (load_balancer.address === sender.address) {
    this.send(msg, 0, msg.length, 9038, 'xxxxxx');
  } else {
    this.send(msg, 0, msg.length, load_balancer.port, load_balancer.address);
  }
  msg = null;
}

// Proxy each incoming TCP connection to the same port on the gate host.
function connHandler(client) {
  var port = this.address().port;
  var gate = net.connect({ host: 'xxxxxx', port: port });

  gate.pipe(client).pipe(gate);

  client.setNoDelay();
  gate.setNoDelay();

  // Swallow errors so an error on either side does not crash the process.
  client.on('error', function (error) {});
  gate.on('error', function (error) {});
}

And that is it, yet monitoring with pm2 currently shows:

│ App name │ id │ mode │ PID   │ status │ restarted │ uptime │       memory │
├──────────┼────┼──────┼───────┼────────┼───────────┼────────┼──────────────┤
│ index    │ 0  │ fork │ 16043 │ online │         0 │ 2h     │ 977.125 MB   │

The number of sockets currently connected is 267, which is low for this server; it usually goes beyond 1,000. But the point is that it is still leaking either way.

What is wrong?


  • 5 minutes after posting the above, memory is at 987.281 MB with 261 sockets connected.
  • 3 minutes later: 993.676 MB with 260 sockets connected.
  • 10 minutes later: 1.060 GB with 248 sockets connected.

Is Node not garbage-collecting?
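One way to narrow that down is to log what V8 itself reports alongside the RSS that pm2 and top show; if heapUsed stays flat while rss keeps climbing, the growth is outside the JavaScript heap (buffers, sockets, fragmentation) rather than uncollected objects. A minimal diagnostic sketch, where the 60-second interval is arbitrary and server3 is the TCP server from the code above:

// Log V8 heap statistics next to the process RSS and the open TCP connection
// count so the pm2 numbers can be correlated with what the JS heap is doing.
setInterval(function () {
  var mem = process.memoryUsage();
  server3.getConnections(function (err, count) {
    console.log(
      'rss=' + (mem.rss / 1048576).toFixed(1) + ' MB',
      'heapTotal=' + (mem.heapTotal / 1048576).toFixed(1) + ' MB',
      'heapUsed=' + (mem.heapUsed / 1048576).toFixed(1) + ' MB',
      'tcpConns=' + count
    );
  });
}, 60000);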

majidarif
  • Do you know exactly what is being measured when pm2 reports memory usage? On some platforms the reported figure includes temporary allocations (caches, code, etc.) that can be reclaimed by the system if needed, so it is not a good measure of a leak. Also, what platform are you running on? In other words, are you 100% sure this is actually a real leak? If you run it for days, does it eventually crash because of memory exhaustion? – jfriend00 Feb 15 '15 at 21:33
  • @jfriend00 I'm on Ubuntu, and `top` shows the same amount of memory. Also, when it gets close to the maximum memory, the application restarts (probably a crash). – majidarif Feb 15 '15 at 21:37
  • Then it's probably time to take some heap snapshots and see what's using all the memory (a sketch follows these comments). Not an easy thing to sort through, but it's the only way I know of to find out what is using it. – jfriend00 Feb 15 '15 at 21:40
  • Worth reading this: http://stackoverflow.com/questions/4802481/how-to-see-top-processes-by-actual-memory-usage – jfriend00 Feb 15 '15 at 21:42
  • @jfriend00 I did the profiling with memwatch and other tools, but they just don't show any problem. – majidarif Feb 15 '15 at 22:00
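As jfriend00 suggests, a heap snapshot is the most direct way to see which objects are holding memory. A minimal sketch, assuming the third-party heapdump module is installed (npm install heapdump); the 10-minute interval is arbitrary:

var heapdump = require('heapdump');

// Write a V8 heap snapshot every 10 minutes. Loading two snapshots taken a
// while apart into Chrome DevTools and comparing them shows what is growing.
setInterval(function () {
  heapdump.writeSnapshot(Date.now() + '.heapsnapshot', function (err, filename) {
    if (err) console.error('heapdump failed:', err);
    else console.log('heap snapshot written to ' + filename);
  });
}, 10 * 60 * 1000);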

1 Answer


For these lines:

client.on('error', function (error) {});
gate.on('error', function (error) {});

You might try:

  1. deleting them, as they appear to allocate a new function per connection for no purpose, or
  2. defining the no-op function just once at the program's top-level scope and binding both events to that single instance (or using something like _.noop from lodash); a sketch of this option follows.

That's the only bit of code here that looks at all dubious to me.
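A minimal sketch of the second option, reusing the names from the question:

// One shared no-op error handler instead of two new closures per connection.
function noop() {}

function connHandler(client) {
  var port = this.address().port;
  var gate = net.connect({ host: 'xxxxxx', port: port });

  gate.pipe(client).pipe(gate);

  client.setNoDelay();
  gate.setNoDelay();

  client.on('error', noop);
  gate.on('error', noop);
}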

Peter Lyons
  • Leak still exists. Moving on to golang instead. – majidarif Feb 16 '15 at 04:16
  • OK, good luck. For a proxy like this, golang might be a better choice. If you want to stick with node, you might see whether io.js v1.2 or node.js v0.12 works any better. – Peter Lyons Feb 16 '15 at 04:58
  • Thanks. :) I am on v0.12 and have also tried the latest io.js (both via `nvm`), and both had the leak. The golang server is running now: `750` online sockets at `100MB`, which is a lot better than the numbers in my question. – majidarif Feb 16 '15 at 05:52