
Consider the snippet below, which converts an array of objects to an array of numbers, filters out the negative values, and then doubles the rest:

// note: (new Array(400)).fill({...}) would reuse a single object (and a
// single random value) for every slot; generate a distinct value per element
var objects = Array.from({ length: 400 }, () => ({
    value: Math.random() * 10 - 5
}));

var positiveObjectValuesDoubled = objects.map(
    item => item.value
).filter(
    value => value > 0
).map(
    value => value * 2
);

When chained together like this, how many actual Array objects are created in total: 1, or 3 (excluding the initial `objects` array)?

In particular, I'm talking about the intermediary Array objects created by filter, and then by the second map call in the chain: considering these array objects are never explicitly referenced, are JavaScript runtimes smart enough to optimize where possible in this case and reuse the same memory area?

If this cannot be answered with a clear yes-or-no, how could I determine this in various browsers? (to the best of my knowledge, the array constructor can no longer be overridden, so that's not an option)

John Weisz
  • The arrays will likely be created; optimising this away into essentially stream processing is probably more complex than a Javascript engine would attempt. The *values* inside the arrays will likely be interned/referenced, so you only have the overhead of the array structure itself, not necessarily of the values it contains. Also, the array created by the first `map` can be garbage collected by the time the `filter` is done with it, so it's hard to say how many arrays will be in memory at once. – deceze Sep 09 '16 at 09:16
  • This is beside the question, but FWIW, I'd implement this as one `reduce` operation instead of a chain of three operations; this is guaranteed to be more efficient from the number of array iterations alone. – deceze Sep 09 '16 at 09:24
  • @deceze *"I'd implement this as one reduce operation instead of a chain of three operations"* -- by all means, but this is more of a theoretical question. As in, *which additional approaches are possible in high-performance code, where profiling does not immediately yield an obvious answer*. The readability of ES5 array functions is exceptional, especially with ES6 arrow functions, hence our take on these guys. – John Weisz Sep 09 '16 at 09:41
  • Sure, hence just a note. It sounds to me like you'd like Javascript to optimise this into sort of a lazily processing functionally chained list; which it certainly won't by default. Some JS engines have multi-stage compilation in which special compilers will be invoked on *highly utilised code*; i.e. if the engine detects a certain block of code is called a lot, it will try to optimise the hell out of that in parallel. But whether you'll hit this condition and whether it actually does something can only be tested with an actual benchmark. You should start with fast code, not count on JS. – deceze Sep 09 '16 at 10:05
  • related: [Shortcut fusion optimization in javascript](http://stackoverflow.com/q/37204329/1048572) and [Do Immutable.js or Lazy.js perform short-cut fusion?](http://stackoverflow.com/q/27529486/1048572). But no, to my knowledge no JS engine does this natively, also because reordering impure functions is not allowed. – Bergi Sep 09 '16 at 11:28
  • "*use the same memory area?*" - that might be quite possible, and the thing that you really can test - just create multiple programs with different chain lengths and compare their memory usage on a large array. – Bergi Sep 09 '16 at 11:30
  • This is unanswerable in general. You'll just have to profile the heap size in dev tools, and with the understanding that the answer will change from browser to browser and within different versions of the same browser. – Jared Smith Feb 03 '17 at 12:33
  • As a basic rule: you can't change how things work internally, so try to optimize externally. Do `objects.map(item => item.value * 2).filter(value => value > 0);` – Kulvar Feb 10 '17 at 10:55
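Bergi's memory-comparison suggestion from the comments could be sketched as a standalone script like this (assuming Node.js for `process.memoryUsage()`; absolute numbers are noisy and vary per engine, run, and GC timing, so only compare deltas between variants):

```javascript
// Rough heap-usage probe in Node: measure heap growth while running the
// chained version on a large input, then compare against a variant with
// a different chain length. Numbers are indicative only.
const data = Array.from({ length: 100000 }, () => ({
    value: Math.random() * 10 - 5
}));

// Only available when run with `node --expose-gc`; harmless otherwise.
global.gc && global.gc();
const before = process.memoryUsage().heapUsed;

const chained = data
    .map(item => item.value)
    .filter(value => value > 0)
    .map(value => value * 2);

const after = process.memoryUsage().heapUsed;
console.log('heap delta (bytes):', after - before);
console.log('result length:', chained.length);
```

Running the same probe with one, two, and three chained calls and plotting the deltas gives a crude answer to whether the intermediaries cost real memory in your engine.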

1 Answer


Good commentary so far; here's a summary answer: an engine might optimize for memory usage across chained method calls, but you should never count on an engine to do that optimization for you.

As your chain of method calls is evaluated, the engine's memory heap is affected in the same order, step by step (see the MDN documentation on the event loop). But exactly how depends on the engine: one implementation of Array.prototype.map() might create a new array and garbage-collect the old one before the next message executes, another might leave the old one hanging around until the space is needed again, or change an array in place. The rabbit hole for understanding this is very deep.
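One thing you can check from script is the language-level behavior: the spec requires each map and filter call to return a fresh Array object, so three arrays exist at that level regardless of how the engine manages their backing memory internally. A minimal sketch (variable names are mine):

```javascript
// Spec-level check: each chained call returns a distinct Array object.
// Whether the engine reuses their backing memory is invisible from script.
var objects = Array.from({ length: 400 }, () => ({
    value: Math.random() * 10 - 5
}));

var mapped   = objects.map(item => item.value);   // array #1
var filtered = mapped.filter(value => value > 0); // array #2
var doubled  = filtered.map(value => value * 2);  // array #3

console.log(mapped !== filtered && filtered !== doubled); // true
```

So the "1 or 3" question is really about internal allocation strategy, not about object identity, which is fixed by the spec.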

Can you test it? Sometimes! The Stack Overflow question *jQuery or javascript to find memory usage of page* and Google's documentation on memory profiling are good places to start. Or you can just measure speed with something like http://jsperf.com/, which might at least give you an idea of how space-expensive something is. But you could also spend that time doing straightforward optimization in your own code. Probably a better call.
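As one example of such straightforward optimization, deceze's single-`reduce` suggestion from the comments might look like this (the function name is mine); it allocates only the one result array instead of three:

```javascript
// Single-pass version: one reduce, one allocated result array,
// instead of three chained calls producing three arrays.
function positiveValuesDoubled(objects) {
    return objects.reduce((result, item) => {
        if (item.value > 0) {
            result.push(item.value * 2);
        }
        return result;
    }, []);
}

// e.g. positiveValuesDoubled([{ value: -1 }, { value: 2 }, { value: 3 }])
// returns [4, 6]
```

This keeps the allocation behavior obvious from the code itself, rather than hoping the engine fuses the chain for you.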

Peter Behr