
I realized yesterday that most of the logging libraries for Node.js seem to use blocking/synchronous calls. Logging is normally an I/O operation, and with Node.js we are supposed to use non-blocking/asynchronous I/O wherever possible.

console.log (process.stdout.write) has been a synchronous operation since Node.js 0.6, as far as I know.

It occurred to me that for servers that execute a lot of logging statements, using blocking I/O for them might be a big performance penalty.
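To make the blocking-vs-buffered distinction concrete, here is a small sketch (not from the original benchmark; file paths and counts are illustrative) comparing a loop of synchronous `fs.writeSync` calls with a loop that only queues data onto an `fs.createWriteStream`:

    // Sketch: blocking writes vs. queueing onto a write stream.
    var fs = require('fs');
    var os = require('os');
    var path = require('path');

    var syncPath = path.join(os.tmpdir(), 'sync_log.txt');
    var streamPath = path.join(os.tmpdir(), 'stream_log.txt');
    var line = 'some log line with a reasonable amount of text in it\n';

    // Blocking: each call returns only after the data is handed to the OS.
    var fd = fs.openSync(syncPath, 'w');
    var t0 = Date.now();
    for (var i = 0; i < 10000; i++) {
        fs.writeSync(fd, i + ' ' + line);
    }
    fs.closeSync(fd);
    console.log('sync loop: ' + (Date.now() - t0) + 'ms');

    // Non-blocking: writes are buffered; the actual I/O happens later.
    var ws = fs.createWriteStream(streamPath);
    var t1 = Date.now();
    for (var j = 0; j < 10000; j++) {
        ws.write(j + ' ' + line);
    }
    console.log('stream loop (queue only): ' + (Date.now() - t1) + 'ms');
    ws.end(function () {
        console.log('stream flushed: ' + (Date.now() - t1) + 'ms');
    });

Note that the stream-loop timing above only measures how fast the event loop can queue the writes, not how long the I/O itself takes.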

I ran "logging" statements with Redis, fs, Bunyan and Winston, and I got these results on a MacBook Pro:

redis: 16ms

fs-write-stream: 90ms

bunyan: 414ms

winston: 491ms

So it seems that just using a Redis client to send a message out over network I/O is the fastest way to get log data out of the Node.js event loop.

Here are the tests:

    // fs
    var fs = require('fs');

    // redis
    var redis = require('redis');
    var client = redis.createClient();  //connect to local redis db

    // bunyan
    var bunyan = require('bunyan');

    var bunyanLogger = bunyan.createLogger({
        name: 'benchmark',
        streams: [
            {
                level: 'info',
            path: '../bunyan_log.txt'  // log INFO and above to this file
            }
        ]
    });

    // winston
    var winston = require('winston');

    var winstonLogger = new (winston.Logger)({
        transports: [
            new (winston.transports.File)({ filename: '../winston_log.txt' })
        ]
    });


    ////////////////////////////////////////////////////////////////////////////

    console.time('redis-time');

    for (var i = 0; i < 12000; i++) {

        client.set('key' + i, 'value' + i + 'aegeggaiojigfeijoagreoiraegioagrijogerawijogerwijogerijoegwoijegwijoegwjio');

    }

    console.timeEnd('redis-time');

    ////////////////////////////////////////////////////////////////////


    console.time('fs-write-stream-time');

    var wstream = fs.createWriteStream('../fs_log.txt');

    for (var i = 0; i < 12000; i++) {

        wstream.write('key' + i + 'value' + i + 'aegeggaiojigfeijoagreoiraegioagrijogerawijogerwijogerijoegwoijegwijoegwjio' + '\n');

    }

    wstream.end();

    console.timeEnd('fs-write-stream-time');


    ///////////////////////////////////////////////////////////////


    console.time('bunyan-time');

    for (var i = 0; i < 12000; i++) {

        bunyanLogger.info('bunyan' + i);

    }

    console.timeEnd('bunyan-time');


    /////////////////////////////////////////////////////////////


    console.time('winston-time');

    for (var i = 0; i < 12000; i++) {

        winstonLogger.info('winston' + i);

    }

    console.timeEnd('winston-time');


////////////////////////////////////////////////////////////////

Am I on to something, or have I got something wrong?

Ideally, it seems, with Node.js servers you would use Redis to ship logging requests to a logging server somewhere, which would process a queue.
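The queue idea can be sketched transport-agnostically. In this sketch (all names are made up for illustration), `log()` only pushes onto an in-memory buffer, and a pluggable `shipLogs` function flushes batches asynchronously; in production `shipLogs` could do a Redis `LPUSH` and a separate worker could `BRPOP` and persist the entries. The demo transport just collects batches into an array:

    // Sketch of the "log queue" idea: logging never does I/O inline.
    function createQueueLogger(shipLogs, flushMs) {
        var buffer = [];
        setInterval(function () {
            if (buffer.length === 0) return;
            var batch = buffer;
            buffer = [];
            shipLogs(batch);     // fire-and-forget from the caller's view
        }, flushMs).unref();     // don't keep the process alive just to flush
        return {
            log: function (msg) { buffer.push(msg); }  // O(1), no I/O here
        };
    }

    // Demo transport: collect batches instead of hitting Redis.
    var shipped = [];
    var logger = createQueueLogger(function (batch) {
        shipped = shipped.concat(batch);
    }, 50);

    logger.log('request handled');
    logger.log('cache miss');

    setTimeout(function () {
        console.log('shipped ' + shipped.length + ' entries');  // → shipped 2 entries
    }, 120);

The design point is that the hot path (`log()`) costs an array push, and the actual I/O happens in batches off to the side, whatever the transport ends up being.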

Alexander Mills
  • I also ran it with syslog to /dev/null and got "syslog-time: 54ms"; when I logged Bunyan to /dev/null and Winston to /dev/null I got the same results – Alexander Mills Nov 03 '15 at 21:32

1 Answer


I just realized I made a mistake in the benchmark: I need to wrap the Redis calls in an on('ready') callback, like so:

client.on('ready',function(){

    console.time('redis-time');

    for (var i = 0; i < 12000; i++) {

        client.set('key' + i, 'value' + i + 'aegeggaiojigfeijoagreoiraegioagrijogerawijogerwijogerijoegwoijegwijoegwjio');

    }

    console.timeEnd('redis-time');

});

After making that change, Redis is actually much slower than fs.createWriteStream, but still about twice as fast as Bunyan and Winston, probably because it's not calling JSON.stringify on the input.
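There is a second measurement issue worth noting here as well: stopping the timer right after a loop of stream writes only measures how long it took to *queue* the writes, not how long the I/O actually took to complete. A sketch of the fix (illustrative path and counts), ending the timer in the stream's finish callback:

    // Sketch: measure queue time and full flush time separately.
    var fs = require('fs');
    var os = require('os');
    var path = require('path');

    var file = path.join(os.tmpdir(), 'bench_log.txt');
    var ws = fs.createWriteStream(file);

    var start = Date.now();
    for (var i = 0; i < 12000; i++) {
        ws.write('key' + i + ' value' + i + '\n');
    }
    var queued = Date.now() - start;   // time to hand data to the stream

    ws.end(function () {               // runs on 'finish', everything flushed
        var flushed = Date.now() - start;
        console.log('queued in ' + queued + 'ms, flushed in ' + flushed + 'ms');
    });

The same caveat applies to the Redis numbers: the loop timing measures how fast commands are queued on the client, not when the replies come back.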

The bottom line is that fs.createWriteStream is way faster than Bunyan or Winston... but I am not sure how much that matters for very small I/O ops.

Alexander Mills
  • That sounds strange. Try running the whole set of tests multiple times in a row and then calculating medians for every logger – matus Feb 05 '16 at 16:33
  • try it, copy the code with the new redis on ready handler in place of the original and you should see the same results – Alexander Mills Feb 05 '16 at 18:03
  • Don't you think the type of stream you are using also plays a major role? The higher overhead of file I/O may be why `bunyan` and `winston` take more time. – JohnTheBeloved Jul 24 '17 at 12:04
  • what are the different kinds of streams? they are all writable streams, but beyond that, not sure what difference there would be – Alexander Mills Jul 24 '17 at 21:54
  • my hypothesis was Bunyan and Winston are slower "because they are calling JSON.stringify" – Alexander Mills Jul 24 '17 at 21:55
    I meant that the time it takes to write to file I/O might be quite different from the time taken for stdout I/O or network I/O – JohnTheBeloved Aug 01 '17 at 20:30