
I'm working on a project that handles billions of requests per day, and performance and memory usage are big concerns.

Yesterday I implemented some changes in code that executes many, many times per minute. I used arrow functions for some mapping, but the team asked me to always use `thisArg` (the second parameter of `.map`) instead, and I didn't get why, so I ran a lot of benchmarks, and they show the opposite.

Benchmark (Bind vs Arrow Function vs ArgsThis)

Their argument is that garbage collection is far worse with arrow functions, and that these benchmarks don't reflect a real scenario because they have a very shallow context.


The benchmarks show that arrow functions are much faster. Is there something I'm missing to consider?
Thanks!

EDIT:

The question is about cases where we need variables from the upper context, for example:

function doSomething() {
    const someUpperContextValue = 5;
    [1, 2, 3].map((currentValue) => calculateValues(someUpperContextValue, currentValue));
}
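For comparison, here is a sketch of the `thisArg` variant the team asked for (the names `calculateValues` and the addition it performs are just stand-ins; the real implementation isn't shown in the question). The upper-context value is passed through `.map`'s second argument and read via `this` inside a regular (non-arrow) callback:

```javascript
// Hypothetical stand-in for the real calculateValues from the question.
function calculateValues(upper, current) {
  return upper + current;
}

function doSomethingWithThisArg() {
  const context = { someUpperContextValue: 5 };
  // The second argument to .map becomes `this` inside the callback,
  // so a non-arrow function is required here.
  return [1, 2, 3].map(function (currentValue) {
    return calculateValues(this.someUpperContextValue, currentValue);
  }, context);
}

console.log(doSomethingWithThisArg()); //-> [ 6, 7, 8 ]
```

Note that this only works with a classic `function` expression; an arrow function would ignore the `thisArg` entirely, since arrows don't have their own `this`.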
Pedro Kehl
  • ["Which is faster" by Eric Lippert](https://ericlippert.com/2012/12/17/performance-rant/) – VLAZ Sep 17 '20 at 19:28
  • I'll throw some gas on the fire. Why do you need either? Both binding and inline arrow functions are going to be creating a new function every time. If you are so concerned with memory usage, why not reuse a single function, rather than making new ones? – Taplar Sep 17 '20 at 19:35
  • Good question Taplar, check the benchmark, the reason is explicit there. It's for cases where we need variables from the upper context, not just what we have on the elements of the array. – Pedro Kehl Sep 17 '20 at 19:37
  • `Function.prototype.call` (or `apply`), vs `bind`, can be used to perform the same type of operation for changing the context of a function, without creating an entirely new instance of the function. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/call – Taplar Sep 17 '20 at 19:39
  • `.call` and `.apply` invoke the function immediately; how would that work with `.map`? – Pedro Kehl Sep 17 '20 at 19:48

1 Answer


Disclaimer: just know that your less-than-real-world benchmarks likely don't show the full picture, but I won't get into that here.

We can take a very elementary look at how JavaScript engines call functions. Please note that I am not an expert on how JS runtimes work, so please correct me where I am wrong or incomplete.

Whenever a function is executed, an execution context is created and pushed onto the call stack. For normal/classical (non-arrow) functions, the engine creates a new context with its own variable scope, and within this context are some implicitly created bindings, such as `this` and `arguments`, which the engine puts there.

function foo() {
  const self = this;
  function bar(...args) {
    console.log(this === self); //-> false (bar gets its own `this`)
    console.log(arguments); //-> { length: 1, 0: 'barg' } (bar's own arguments)
    console.log(args); //-> [ 'barg' ]
  }
  bar('barg');
  console.log(arguments); //-> { length: 1, 0: 'farg' }
}
foo.call({}, 'farg'); // use an object receiver so foo's `this` differs from bar's

Arrow functions work very much like regular functions, but they do not get their own `this` or `arguments` bindings; they inherit those from the enclosing scope. Pay close attention to the difference in the log results:

function foo() {
  const self = this;
  const bar = (...args) => {
    console.log(this === self); //-> true (the arrow inherits foo's `this`)
    console.log(arguments); //-> { length: 1, 0: 'farg' } (foo's arguments)
    console.log(args); //-> [ 'barg' ]
  }
  bar('barg');
  console.log(arguments); //-> { length: 1, 0: 'farg' }
}
foo.call({}, 'farg');

Armed with this very topical... almost superficial... knowledge, you can see that the engine is doing "less work" when creating arrow functions. It stands to reason that arrow functions are inherently faster because of this. Furthermore, I don't believe arrow functions introduce any more potential for memory leaks or garbage collection pressure than regular functions (helpful link).
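The benchmark title also mentions `bind`, so for completeness, here is a sketch of that variant (again, `calc` is a hypothetical stand-in): `bind` pre-fills arguments but still allocates a new bound function object every time the line runs, on top of the wrapping machinery around the original function.

```javascript
// Hypothetical stand-in for the real mapped function.
function calc(upper, current) {
  return upper + current;
}

function doSomethingWithBind() {
  const someUpperContextValue = 5;
  // bind pre-fills the first argument; .map's extra arguments
  // (index, array) are simply ignored by calc here.
  return [1, 2, 3].map(calc.bind(null, someUpperContextValue));
}

console.log(doSomethingWithBind()); //-> [ 6, 7, 8 ]
```

So `bind` doesn't avoid the per-call allocation either; it just trades a closure for a bound-function wrapper.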

Edit:

It's worth mentioning that every time you declare a function, you're creating a new function object which takes up space in memory. The JavaScript engine must also preserve the "context" for each function, so that variables defined in "parent" scopes are still available when the function executes. For example, any time you call foo above, a new bar function is created in memory with access to the full context of its parent foo call scope. JS engines are good at figuring out when a context is no longer needed and will clean it up during garbage collection, but know that even garbage collection can be slow if there's a lot of trash.
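This is also what Taplar's comment was getting at: when the callback doesn't need upper-context variables at all, hoisting it out avoids the per-call allocation entirely. A small sketch (all names here are made up for illustration):

```javascript
// Allocates a fresh closure over `factor` on EVERY call.
function makeMapper(factor) {
  return (n) => n * factor;
}

// Created once at module load and reused; no capture needed.
const double = (n) => n * 2;

function scaleAll(values) {
  return values.map(double); // no per-call function allocation
}

console.log([1, 2, 3].map(makeMapper(2))); //-> [ 2, 4, 6 ]
console.log(scaleAll([1, 2, 3]));          //-> [ 2, 4, 6 ]
```

The trade-off only exists when you genuinely need upper-context values; that is the case the question is about, and there either a closure (arrow) or a `thisArg`/`bind` wrapper has to carry them somehow.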

Also, engines have an optimization layer (see TurboFan) which is really smart and constantly evolving. To illustrate this, consider the example you provided:

function doSomething() {
    const someUpperContextValue = 5;
    [1, 2, 3].map((currentValue) => calculateValues(someUpperContextValue, currentValue));
}

Without knowing much about the internals of JS engines, I could see engines optimizing the doSomething function because the someUpperContextValue variable has a static value of 5. If you change that value to something like Math.random(), the engine no longer knows the value ahead of time and cannot optimize as aggressively. For these reasons, many people will tell you that you're wasting your time asking "which is faster", because you never know when a small, innocuous change will completely kill your performance.
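If you do want to measure it yourself, a rough micro-benchmark sketch follows (the numbers will vary wildly by engine, version, and run, which is exactly the point above; `calculateValues` is again a stand-in for the real work):

```javascript
// Hypothetical stand-in for the real mapped work.
function calculateValues(upper, current) {
  return upper + current;
}

const data = Array.from({ length: 1000 }, (_, i) => i);

// Arrow-function callback closing over `upper`.
function withArrow(upper) {
  return data.map((v) => calculateValues(upper, v));
}

// Classic callback reading `upper` off the thisArg.
function withThisArg(upper) {
  return data.map(function (v) {
    return calculateValues(this.upper, v);
  }, { upper });
}

let t = performance.now();
for (let i = 0; i < 1000; i++) withArrow(5);
console.log("arrow:  ", (performance.now() - t).toFixed(1), "ms");

t = performance.now();
for (let i = 0; i < 1000; i++) withThisArg(5);
console.log("thisArg:", (performance.now() - t).toFixed(1), "ms");
```

Whatever the timings say today, both variants produce identical output, so readability and team convention are reasonable tie-breakers.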

Ryan Wheale
  • Interesting thought process. I'm inclined to accept your answer. I believe you're right and it makes a lot of sense; I'm gonna test and stress this solution just to make sure. Thank you! – Pedro Kehl Sep 17 '20 at 20:19
  • The fallacy with benchmarks is that JS engines are constantly improving their ability to optimize code ahead of execution (read about v8's [Turbofan](https://v8.dev/docs/turbofan)). So what's slow today might be super fast tomorrow, and every so often some fast code becomes slower - you never know. So take some time, run some tests, and see what's faster - but don't get too consumed with it. – Ryan Wheale Sep 17 '20 at 20:23
  • @RyanWheale not only is the optimisation a moving target, another problem with the benchmarks is that they rarely use real data. OP's benchmarks also fall in the same trap. The data is `new Array(100).fill(1);` and `new Array(1_000_000).fill({ id: "some_id", value: 0 })` which is completely artificial. It's monotonous data and searching it might exhibit different behaviour than what real data would. [See basic example of code run time being different based on data composition](https://stackoverflow.com/q/11227809) – VLAZ Sep 17 '20 at 20:56
  • @VLAZ what I'm comparing here is simply arrow functions vs other implementations, and trying to understand why they have different performance results. I totally agree with the philosophy of avoiding premature super-optimizations, but that is not the subject of the question. If you want to talk about this, the thing is, some devs are trying to force a code standard that does NOT have good readability in exchange for a false "super optimization" (using thisArg, the second parameter of map). The answers will help me argue against these "super optimizations" that add extra code and complexity. – Pedro Kehl Sep 17 '20 at 22:03
  • @VLAZ I understand their side: they want to reduce memory usage and improve performance, and the team is very focused on this, so just using readability as an argument didn't work. So now I'm gonna play the game and see which is better performance-wise. Later I can try to convince them that premature super-optimizations are not healthy for the project. – Pedro Kehl Sep 17 '20 at 22:07
  • @PedroKehl It's not *early* optimisation I was saying was the problem. It's *unrealistic benchmarks*. The data you are testing with is *not* representative of live data and *very likely* wouldn't have the same performance characteristics, either. So the benchmark might be completely off just based on that. Any conclusions you draw based on it could be completely false. *That* is the issue here. Checking it early is also an issue but it's just part of the problem. – VLAZ Sep 17 '20 at 22:08
  • @VLAZ (I'm enjoying this conversation) What would you do if someone said to you, "Always use thisArg, because arrow functions have bad performance and the garbage collector does not work well with them"? First, with no benchmark yet, I mentioned code readability and possible issues with type checking (TS is being used on this project). They kept hammering on the same point, so I moved on to understanding whether there is a huge difference or not. That is the main goal of this question. – Pedro Kehl Sep 17 '20 at 22:13
  • @PedroKehl I'd ask them to 1. prove it 2. show the performance difference is relevant. The burden of proof is on the one making the statement, not the one receiving it. – VLAZ Sep 17 '20 at 22:15