
Consider a Node.js loop that fires off HTTP requests, like this:

var http = require('http');

function sendOne(opts, body) {
    var post_options = {
        // ...
    };

    var sender = http.request(post_options, function (res) {
        // process response
    });

    sender.write(body);
    sender.end();
    return sender;
}

for (var i = 0; i < maxUploadIterations; i++) {
    var body = null; // construct payload
    var sent = sendOne(opts, body);
    // somehow wait on sent...
}

Note that the `http.request` object has a callback function specified for handling the response. My question is: how do I synchronously wait on the `sent` object returned from `sendOne` using the built-in Node primitives?

I understand there are multiple frameworks like Express and Futures that can handle this, but I want to understand the primitive behavior rather than using a "magic" framework.

Is it possible to take `sent` and put a handler on it like this:

var sent = ...
sent.on("complete", function() { ... } );

?

If so, exactly which handler and how to format the loop with it?

Brad
  • One thing you could do is wrap it in a [Promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise). I know you said that you don't want a framework, but Promises are part of ES6 and are a part of node.js now. – zero298 Feb 10 '16 at 04:13
  • I agree with @zero298 but you could also take a look at the source code of [sync-request](https://github.com/ForbesLindesay/sync-request) mentioned in [this](http://stackoverflow.com/a/32961480) question. – Grant Hames-Morgan Feb 10 '16 at 04:21
  • Do you really want to make synchronous requests or do you want to know how to wait for asynchronous operations? They are not the same thing. – slebetman Feb 10 '16 at 04:33
  • I want to know how to wait for asynchronous operations. Ideally I want to know how to do a synchronous wait on an arbitrary collection of async operations. – Brad Feb 10 '16 at 04:39
  • Also, I want to be able to vary the contents of that collection. So I don't necessarily want to fire all and wait. I want to be able to fire N and wait. Then fire N+1...K and wait, etc. Therefore I'm trying to understand the primitives node provides for synchronization and how to use that to build iterative structures. – Brad Feb 10 '16 at 04:46
  • Put simply, you CAN'T synchronously wait on an http request. Node.js network I/O is purposely asynchronous - it is not synchronous. You WILL use callbacks or promises or streams or some other async notification scheme to know when a networking operation has completed. – jfriend00 Feb 10 '16 at 05:07
  • @jfriend00 - I understand that. But what isn't clear is the most basic, most rudimentary, built-in, native notification scheme node provides without extra libraries (in node 4.1 and/or 4.3 which is what is available in my environment.) E.g. the sync-request library seems to do its magic by spawning an extra process. It's looking like the answer is "callback functions", but they result in awkward structures like the one in Kevin's answer, though that seems to be the most general answer to the question as I'm posing it. Great for "fire a bunch and exit", but lacks fine control for next steps – Brad Feb 10 '16 at 19:13
  • I/O programming of any kind in node.js involves callbacks - ALWAYS. If you don't want that go get a different environment. To program in synchronous I/O with any possibility of server scale, you will need a multi-threaded environment (like perhaps one of the Java frameworks). That is simply NOT node.js. So you will need to learn node.js for what it is and how you program it OR pick a different environment that works differently. Promises still use callbacks, but give you a lot more structural help with async programming. If you persist, you will learn it and then it will feel natural. – jfriend00 Feb 10 '16 at 19:19

1 Answer


Another option: use old-fashioned callbacks, though Promises may be cleaner.

var http = require('http');

function sendOne(opts, body, next) {
    var post_options = {
        // ...
    };

    var sender = http.request(post_options, function (res) {
        // process response
        next();
    });

    sender.write(body);
    sender.end();
}

var maxUploadIterations = 100;
function next(index) {
    // termination condition
    if (index >= maxUploadIterations) {
        return;
    }

    // setup next callback
    var nextCallback = next.bind(null, index + 1);

    // get these from wherever
    var opts = {};
    var body = {};
    sendOne(opts, body, nextCallback);
}

next(0);

If you have more work to do afterwards, you can change the termination condition to call something else rather than returning.

Kevin Burdett
  • I think that's close to meeting my requirements. Looks like I'm going to wind up with a call stack of "next" that is maxUploadIterations deep. Will node clean up the tail recursion, or do I need to be aware of this when maxUploadIterations gets really large? – Brad Feb 10 '16 at 04:36
  • I don't know too much about tail call optimizations in ES6; my guess is you would need to flatten this to a single function for it to detect the tail recursion. I'm not sure how smart it is... This wouldn't be hard though, I split the function largely for readability. – Kevin Burdett Feb 10 '16 at 04:42
  • @Brad I ran a quick test using the NPM stack-trace module (https://www.npmjs.com/package/stack-trace) with NodeJS 5.5.0. It appears to stay locked at a comfortable depth of 3, so either NodeJS is using tail call optimization, or it is an artifact of the work scheduler. I used `setTimeout` in place of `http.request` for testing. It may behave differently, though I suspect that anything scheduling asynchronous work will function the same. You'll have to test it for yourself in your environment. – Kevin Burdett Feb 10 '16 at 05:07
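The stack-depth behavior described in the last comment can be sanity-checked with a small sketch, using `setImmediate` as a stand-in for `http.request`. Because each `next` call is invoked from a fresh event-loop turn, the stack never deepens, so no tail call optimization is involved at all.

```javascript
var maxDepth = 0;

function next(index, done) {
    // termination condition
    if (index >= 10000) {
        return done();
    }

    // approximate the current stack depth by counting stack-trace lines
    var depth = new Error().stack.split('\n').length;
    if (depth > maxDepth) {
        maxDepth = depth;
    }

    // schedule the next iteration; the callback runs on a fresh stack
    setImmediate(function () {
        next(index + 1, done);
    });
}

next(0, function () {
    console.log('max observed stack depth:', maxDepth);
});
```

This is not really recursion in the stack sense: each `next` returns before its scheduled callback runs, which is why the depth in the comment's test stayed flat regardless of the iteration count.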