I think what you want is:
```javascript
function foo() {
  while (count < 10) {
    process.nextTick(doSomething);
    count++;
  }
}
```
`process.nextTick` will schedule the execution of `doSomething` on the next tick of the event loop. So, instead of switching to `doSomething` immediately, this code will just schedule the execution and complete `foo` first.
You may also try `setTimeout(doSomething, 0)` and `setImmediate(doSomething)`. They'll allow I/O calls to occur before `doSomething` is executed.
Passing arguments to doSomething
If you want to pass some parameters to `doSomething`, then it's best to ensure they'll be encapsulated and won't change before `doSomething` is executed:

```javascript
setTimeout(doSomething.bind(null, foo, bar), 0);
```

In this case `doSomething` will be called with the correct arguments even if `foo` and `bar` are changed or deleted later. But this won't work if `foo` is an object and you change one of its properties.
What are the alternatives?
If you want `doSomething` to be executed in parallel (not just asynchronously, but actually in parallel), then you may be interested in a job-processing solution. I recommend looking at kickq:
```javascript
var kickq = require('kickq');

kickq.process('some_job', function (jobItem, data, cb) {
  doSomething(data);
  cb();
});

// ...

function foo() {
  while (count < 10) {
    kickq.create('some_job', data);
    count++;
  }
}
```
`kickq.process` will create a separate process for processing your jobs, so `kickq.create` will just register the job to be processed. Note that `kickq` uses redis to queue jobs and won't work without it.
Using node.js built-in modules
Another alternative is building your own job-processor using Child Process. The resulting code may look something like this:
```javascript
var fork = require('child_process').fork,
    child = fork(__dirname + '/do-something.js');

// ...

function foo() {
  while (count < 10) {
    child.send(data);
    count++;
  }
}
```
`do-something.js` here is a separate `.js` file with the `doSomething` logic:

```javascript
process.on('message', doSomething);
```
The actual code may be more complicated.
Things you should be aware of
Node.js is single-threaded, so it executes only one function at a time, and it can't utilize more than one CPU. Node.js is asynchronous, so it's capable of processing multiple functions at once by switching between them. It's really efficient when dealing with functions that make lots of I/O calls, because it never blocks: while one function waits for a response from the DB, another function is executed. But node.js is not a good choice for blocking tasks with heavy CPU utilization.
It's possible to do real parallel calculations in node.js using modules like `child_process` and `cluster`. `child_process` allows you to start a new node.js process, and it creates a communication channel between the parent and child processes. `cluster` allows you to run a cluster of identical processes. It's really handy when you're dealing with http requests, because `cluster` can distribute them between workers. So it's possible to create a cluster of workers processing your data in parallel, even though node.js itself is single-threaded.