The ones you listed (`map`, `forEach`) call functions. So compare
const updated = original.map(e => e * 2);
to
const updated = [];
let i; // (Declaring these here is a premature micro-optimization¹ for effect)
const len = original.length;
for (i = 0; i < len; ++i) {
updated[i] = original[i] * 2;
}
For a 100-entry array, the first example has all the overhead of the second plus the overhead of creating the callback function, the call to `map`, and 100 calls to that callback. Of course it's slower in absolute terms — provided the JavaScript engine can't optimize away those calls. If the callback is trivial and the engine identifies the loop as a hot spot, it can. scraaappy put together this benchmark, which for me at least shows Chrome and Firefox optimizing the `map` to be faster than the `for`, while Edge doesn't (which surprised me somewhat; the Chakra engine in Edge is very good). (I wouldn't be surprised if IE11 didn't either, but that benchmark site doesn't appear to work with IE11.)
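If you want to try this yourself without a benchmarking site, here's a rough sketch of such a micro-benchmark. (The function names `viaMap`/`viaFor` are just for illustration; timings vary wildly by engine and warm-up, so treat the numbers as indicative only — proper benchmarks use a tool like benchmark.js and many repetitions.)

```javascript
// Build a 100-entry array like the one in the examples above.
const original = Array.from({ length: 100 }, (_, i) => i);

function viaMap(arr) {
  return arr.map(e => e * 2);
}

function viaFor(arr) {
  const updated = [];
  for (let i = 0, len = arr.length; i < len; ++i) {
    updated[i] = arr[i] * 2;
  }
  return updated;
}

// Warm up so the engine has a chance to optimize both paths...
for (let n = 0; n < 10000; ++n) {
  viaMap(original);
  viaFor(original);
}

// ...then time each one over many iterations.
console.time("map");
for (let n = 0; n < 100000; ++n) viaMap(original);
console.timeEnd("map");

console.time("for");
for (let n = 0; n < 100000; ++n) viaFor(original);
console.timeEnd("for");
```

Both functions produce the same result, of course; only the timings differ from run to run and engine to engine.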
Does it matter in practice? Almost never. Function calls in modern JavaScript engines are extremely fast.
Write what's clear to you (without being really silly). Optimize if and when you have a problem with performance. :-)
¹ What's the premature micro-optimization? If I'd declared `i` in the `for`, like this:
const updated = [];
for (let i = 0, len = original.length; i < len; ++i) {
updated[i] = original[i] * 2;
}
...a different `i` gets created for each loop iteration, so closures created in the loop can close over each of them and not run into the classic closures-in-loops problem.
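A minimal sketch of that per-iteration binding, contrasting `let` with `var` (one shared binding for the whole loop):

```javascript
// With `let`, each iteration gets its own `i`,
// so each closure captures a different binding.
const withLet = [];
for (let i = 0; i < 3; ++i) {
  withLet.push(() => i);
}
console.log(withLet.map(f => f())); // [0, 1, 2]

// With `var`, there's a single `j` for the whole loop,
// so every closure sees its final value.
const withVar = [];
for (var j = 0; j < 3; ++j) {
  withVar.push(() => j);
}
console.log(withVar.map(f => f())); // [3, 3, 3]
```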