Disclaimer: just know that your less-than-real-world benchmarks likely don't show the full picture, but I won't get into that here.
We can take a very elementary look at how JavaScript engines call functions. Note that I am not an expert on how JS runtimes work, so please correct me where I'm wrong or incomplete.
Whenever a function is executed, a "call scope" is created and added to the stack. For normal/classical (non-arrow) functions, the engine creates a new context which has its own "closure" and variable scope. Within this context are some implicitly created variables, such as `this` and `arguments`, which the engine has put there.
```javascript
function foo() {
  const self = this;
  function bar(...args) {
    console.log(this === self); //-> false
    console.log(arguments);     //-> { length: 1, 0: 'barg' }
    console.log(args);          //-> [ 'barg' ]
  }
  bar('barg');
  console.log(arguments);       //-> { length: 1, 0: 'farg' }
}
// use .call to give foo a distinct `this`, so the difference is visible
foo.call({ some: 'context' }, 'farg');
```
Arrow functions work very much like regular functions, but they don't get their own `this` or `arguments`; they see the ones from the enclosing scope. Pay close attention to the difference in the log results:
```javascript
function foo() {
  const self = this;
  const bar = (...args) => {
    console.log(this === self); //-> true
    console.log(arguments);     //-> { length: 1, 0: 'farg' }
    console.log(args);          //-> [ 'barg' ]
  };
  bar('barg');
  console.log(arguments);       //-> { length: 1, 0: 'farg' }
}
foo.call({ some: 'context' }, 'farg');
```
Armed with this admittedly superficial knowledge, you can see that the engine is doing "less work" when creating arrow functions. It would stand to reason that arrow functions are inherently faster because of this. Furthermore, I don't believe arrow functions introduce any more potential for memory leaks or garbage-collection pressure than regular functions (helpful link).
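If you want to poke at the "less work" claim yourself, here's a rough micro-benchmark sketch. The function names and iteration count are my own, and per the disclaimer at the top, numbers like these rarely reflect real-world performance, so treat this as a toy:

```javascript
// Toy micro-benchmark: create and call many short-lived functions.
// Results vary by engine, run, and optimization tier -- illustrative only.
function makeRegular() {
  return function (x) { return x + 1; };
}
function makeArrow() {
  return (x) => x + 1;
}

function time(label, make) {
  const start = Date.now();
  let sink = 0;
  for (let i = 0; i < 1e6; i++) {
    sink += make()(i); // a fresh function object every iteration
  }
  console.log(`${label}: ${Date.now() - start}ms`);
  return sink;
}

time('regular', makeRegular);
time('arrow', makeArrow);
```

Don't read too much into a single run; the optimizer may treat the two loops differently once they warm up.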
Edit:
It's worth mentioning that every time you declare a function, you're defining a new variable which takes up space in memory. The JavaScript engine must also preserve the "context" for each function so that variables defined in "parent" scopes are still available when the function executes. For example, any time you call `foo` above, a new `bar` variable is created in memory with access to the full context of its parent `foo` call scope. JS engines are good at figuring out when a context is no longer needed and will clean it up during garbage collection, but know that even garbage collection can be slow if there's a lot of trash.
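A quick sketch of that point (the `outer`/`inner` names are mine): every call to the outer function produces a brand-new inner function object, each carrying its own copy of the parent context:

```javascript
function outer() {
  let count = 0; // lives in outer's context, preserved for inner
  return function inner() {
    count += 1;
    return count;
  };
}

const a = outer();
const b = outer();
console.log(a(), a()); //-> 1 2
console.log(b());      //-> 1   (b closes over its own count)
console.log(a === b);  //-> false (a distinct function object per call)
```

As long as `a` and `b` are reachable, their contexts can't be collected; once they go out of scope, the engine is free to clean both up.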
Also, engines have an optimization layer (see V8's TurboFan) which is really smart and constantly evolving. To illustrate this, consider the example you provided:
```javascript
function doSomething() {
  const someUpperContextValue = 5;
  [1, 2, 3].map((currentValue) => calculateValues(someUpperContextValue, currentValue));
}
```
Without knowing much about the internals of JS engines, I could see engines optimizing the `doSomething` function because the `someUpperContextValue` variable has a static value of `5`. If you change that value to something like `Math.random()`, the engine no longer knows what the value will be and cannot optimize in the same way. For these reasons, many people will tell you you're wasting your time by asking "which is faster", because you never know when a small, innocuous change completely kills your performance.
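To make that contrast concrete, here are the two variants side by side. `calculateValues` wasn't defined in the original snippet, so a stand-in is assumed here:

```javascript
// Stand-in for the calculateValues from the example above -- an assumption.
const calculateValues = (a, b) => a + b;

function doSomethingStatic() {
  const someUpperContextValue = 5; // constant: the engine can see this never changes
  return [1, 2, 3].map((v) => calculateValues(someUpperContextValue, v));
}

function doSomethingDynamic() {
  const someUpperContextValue = Math.random(); // unknown until runtime
  return [1, 2, 3].map((v) => calculateValues(someUpperContextValue, v));
}

console.log(doSomethingStatic()); //-> [ 6, 7, 8 ]
```

Both versions are perfectly valid JavaScript; the difference only matters to the optimizer, and you can't observe it from the language itself.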