
I have a server built on node.js. Below is one of the request handler functions:

var exec = require("child_process").exec

function doIt(response) {

    //some trivial and fast code - can be ignored

    exec(
        "sleep 10",  //run OS' sleep command, sleep for 10 seconds
        //sleeping(10), //commented out. run a local function, defined below.
        function(error, stdout, stderr) {
            response.writeHead(200, {"Content-Type": "text/plain"});
            response.write(stdout);
            response.end();
    });

    //some trivial and fast code - can be ignored
}

Meanwhile, the same module file defines a local function "sleeping", which, as its name indicates, is meant to sleep for the given number of seconds.

function sleeping(sec) {
    var begin = new Date().getTime();
    while (new Date().getTime() < begin + sec*1000); //just loop till timeup.
}

Here are three questions --

  1. As we know, node.js is single-process, asynchronous, and event-driven. Is it true that ALL functions with a callback argument are asynchronous? For example, if I have a function my_func(callback_func), which takes another function as an argument, are there any requirements on callback_func, or anywhere else, that make my_func asynchronous?

  2. So at least child_process.exec is asynchronous, with an anonymous callback function as its argument. Here I pass "sleep 10" as the first argument, to call the OS's sleep command and wait for 10 seconds. It won't block the whole node process, i.e. any request sent to a different request handler won't be blocked for up to 10 seconds by the "doIt" handler. However, if another request is sent to the server immediately afterwards and is handled by the same "doIt" handler, will it have to wait until the previous "doIt" request ends?

  3. If I use the sleeping(10) function call (commented out) in place of "sleep 10", I find that it does block other requests until the 10 seconds are up. Could anyone explain the difference?

Thanks a bunch!

-- update per request --

One comment says this question seems to be a duplicate of another one (How to promisify Node's child_process.exec and child_process.execFile functions with Bluebird?) that was asked one year after this one. Well, the two are different - this one asks about asynchrony in general, with a specific buggy case, while that one asks about the Promise object per se. Both the intent and the use cases differ.

(If by any chance they are similar, shouldn't the newer one be marked as a duplicate of the older one?)

Bruce
  • Possible duplicate of [How to promisify Node's child\_process.exec and child\_process.execFile functions with Bluebird?](http://stackoverflow.com/questions/30763496/how-to-promisify-nodes-child-process-exec-and-child-process-execfile-functions) – Eliran Malka Nov 02 '16 at 15:19
  • Oh gosh, could you please take a look at the timestamps when questions were posted? This was asked one year earlier. – Bruce Nov 02 '16 at 18:06
  • i don't think that matters as much as which question has more traffic (concluded by the stats - votes, views etc.), and so more likely to be found in a web search. think of the people :) – Eliran Malka Nov 02 '16 at 21:36
  • OK that makes sense. Would you mind telling me the biggest common part as for a duplicate? I have not visited this question for too long and they look quite differently. Maybe I can clarify better if they are indeed different. – Bruce Nov 02 '16 at 22:46

2 Answers


First, you can promisify child_process.exec:

const util = require('util');
const exec = util.promisify(require('child_process').exec);

async function lsExample() {
  const { stdout, stderr } = await exec('ls');
  if (stderr) {
    // the command wrote something to stderr even though it did not fail
    console.log('stderr:', stderr);
  }
  console.log('stdout:', stdout);
}

lsExample();

As an async function, lsExample returns a promise.

Run all promises in parallel with Promise.all([]).

Promise.all([lsExample(), otherFunctionExample()]);

If you need to wait for those parallel promises to finish, await the Promise.all call.

await Promise.all([aPromise(), bPromise()]);

If you need the resolved values from those promises:

const [a, b] = await Promise.all([aPromise(), bPromise()]);
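
The promisified exec rejects when the command exits with a non-zero code, so failures can be handled with try/catch. A small usage sketch (mine, not the original answer's; the failing path is made up for illustration):

const util = require('util');
const exec = util.promisify(require('child_process').exec);

async function safeLs() {
  try {
    const { stdout } = await exec('ls /no/such/dir');
    console.log('stdout:', stdout);
  } catch (err) {
    // the rejection carries the exit code and the command's stderr output
    console.error('command failed:', err.code, err.stderr);
  }
}

safeLs();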

shmck

1) No. For example .forEach is synchronous:

var lst = [1,2,3];
console.log("start")
lst.forEach(function(el) {
    console.log(el);
});
console.log("end")

Whether a function is asynchronous or not depends purely on its implementation - there are no restrictions. You can't know a priori (you have to either test it, know how it is implemented, or read and trust the documentation). What's more, depending on its arguments, the same function can be asynchronous, synchronous, or both.
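
To illustrate (a minimal sketch of my own, not from the original answer): taking a callback does not make a function asynchronous - the implementation decides whether the callback runs immediately or is deferred to a later turn of the event loop:

// Synchronous: the callback runs before my_sync returns.
function my_sync(callback_func) {
    callback_func();
}

// Asynchronous: the callback is deferred via setImmediate.
function my_async(callback_func) {
    setImmediate(callback_func);
}

my_sync(function() { console.log("sync callback"); });
console.log("after my_sync");   // printed AFTER "sync callback"

my_async(function() { console.log("async callback"); });
console.log("after my_async");  // printed BEFORE "async callback"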

2) No. Each request will spawn a separate "sleep" process.
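
For example (a sketch I'm adding, assuming a bare http server on port 8000): two requests sent at the same time each spawn their own sleep process, so both come back after roughly 10 seconds, not 20:

var http = require("http");
var exec = require("child_process").exec;

http.createServer(function(request, response) {
    var started = Date.now();
    exec("sleep 10", function(error, stdout, stderr) {
        response.writeHead(200, {"Content-Type": "text/plain"});
        // each request reports about 10000 ms, even when several run at once
        response.end("took " + (Date.now() - started) + " ms\n");
    });
}).listen(8000);

// try it: run two `curl http://localhost:8000/` commands at the same time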

3) That's because your sleeping function is not a sleep at all. It busy-waits in a loop, checking the date until the time is up (and using 100% of the CPU while doing so). Since node.js is single-threaded, that blocks the entire server - because it is synchronous. This is wrong; don't do it. Use setTimeout instead.
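
A non-blocking replacement for the sleeping function could look like this (my sketch; sleepAsync is a made-up name, doIt mirrors the handler from the question):

// Non-blocking "sleep": schedules the callback instead of spinning the CPU.
function sleepAsync(sec, callback) {
    setTimeout(callback, sec * 1000);
}

function doIt(response) {
    sleepAsync(10, function() {
        response.writeHead(200, {"Content-Type": "text/plain"});
        response.end("slept 10 seconds without blocking the server\n");
    });
}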

freakish
  • I like your answer. For (2), so if I understand correctly, exec actually spawns a new process based on the 1st argument? Or is it just a new thread? And for (3), using setTimeout seems itself asynchronous. Is there a way to test a synchronous local function in place? – Bruce Mar 26 '14 at 08:52
  • @Bruce (2) Yes, `exec` spawns a new subprocess (for current node.js process). There are no threads in node.js. (3) I'm not sure I understand the question: what do you mean by "test in place"?. Anyway there is no way to do synchronous sleep in Node.js. – freakish Mar 26 '14 at 08:57
  • Many thanks, freakish. I think your answer is good enough. I'll await a little bit and see some more comments before accepting it. – Bruce Mar 26 '14 at 09:36
  • Sorry, the last one is still a bit confusing. If I change exec's first argument to "find /" and set {timeout:20000, maxBuffer:20000*1024}, the newly spawned process will still take up 100% CPU; however, it won't block the other requests, which are sent in and handled asynchronously. According to the OS, a newly spawned process is a process anyway, so it should be up to the OS to schedule which core runs it. Even if it takes up 100% of the CPU on that core, it should not affect other processes, like the one that spawned it, right? – Bruce Mar 26 '14 at 19:08
  • @Bruce That's correct. I was talking about the `sleeping` function, not the `sleep 10` subprocess. Note that you're not spawning a new process if you use `sleeping(10)`. The code is wrong in the sense that you call `sleeping(10)` and then pass the result (which is `undefined`) to the `exec` call. Thus it waits `10` seconds (blocking the entire server) and then calls `exec` with `undefined` (which I think should fail, i.e. throw an exception?). – freakish Mar 26 '14 at 19:31
  • This makes great sense. Interestingly there is no exception thrown, even if I just directly pass the undefined as the first argument to exec. Anyway, it answered the question. Thanks! – Bruce Mar 26 '14 at 19:40
  • @Bruce Oh, that's very interesting. It should throw an exception IMHO (JavaScript sometimes sucks ;)). Anyway, I'm glad I could help. – freakish Mar 26 '14 at 19:52