
I often use `Array.from()` or `[...foo]` to obtain an array from an iterable object. I could also iterate it and push each element to an array manually, but I'd prefer the native approach because it's much simpler, and I assumed the native implementation would be more efficient.
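
For reference, a minimal sketch of the three approaches in question (assuming the iterable is a `Set`, as in the benchmark):

```js
const set = new Set([1, 2, 3, 4, 5]);

// Native helpers
const fromArray = Array.from(set);   // [1, 2, 3, 4, 5]
const spreadArray = [...set];        // [1, 2, 3, 4, 5]

// Manual iteration with for..of + push()
const pushed = [];
for (const item of set) {
  pushed.push(item);
}
```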

However, I found a benchmark result which shows that those native approaches are slower.

https://jsperf.com/set-iterator-vs-foreach/4

I've also run tests with fewer (50) and more (10k) elements on Chrome and Firefox, but ended up with similar results.

https://jsfiddle.net/unarist/k0cu8wta/2/

I can understand `[...foo]` being faster than `Array.from()`, because `Array.from()` has to handle array-like objects and the `mapFn` parameter, but I couldn't find a plausible reason for the difference between `[...foo]` and the `for..of`+`push()` approach.
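
To illustrate the extra inputs `Array.from()` accepts (this is just its documented behaviour, not an explanation of the performance gap): it also takes non-iterable array-likes and an optional `mapFn`, whereas spread syntax requires an iterable:

```js
// Array.from() accepts plain array-like objects and an optional mapFn...
const arrayLike = { length: 3, 0: 'a', 1: 'b', 2: 'c' };
Array.from(arrayLike);                 // ['a', 'b', 'c']
Array.from([1, 2, 3], x => x * 2);     // [2, 4, 6]

// ...while spread syntax only works with iterables:
// [...arrayLike];                     // TypeError: arrayLike is not iterable
```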

What makes `for..of`+`push()` faster than `Array.from()` and `[...foo]`?

Update: My concern is neither how fast these are nor which one I should use. I was surprised that the native version is slower than the plain JS version and I wanted to know why, because I assumed that "generally, native is fast".

(e.g. the native version does more work than the loop+push approach, a special optimization applies to one of them, etc.)

I've tested on Chrome 60 and Firefox 54 on Windows 10 x64.

unarist
  • This is not uncommon; most of the time regular loops are faster than native methods in JavaScript. It's probably because the native methods are sturdier: they accept more datatypes, have some degree of error handling when building the new array, etc., while a regular loop is a very simple concept that has none of the above. – adeneo May 14 '17 at 15:23
  • Looks like Chrome did optimise `for … of` on arrays (but not inside `Array.from`). In Chrome 50, they were equally slow. – Bergi May 14 '17 at 15:51
  • Notice that `[...foo]` is *exactly the same* as `Array.from(foo)` (apart from being less explicit/readable) and needs to handle array-like objects as well. – Bergi May 14 '17 at 15:53
  • Don't trust microbenchmarks. Why? I made a test and implemented my own version of `Array.from()`, and it turned out to be about 8-10 times faster than your iterator implementation, although I used an almost identical implementation. WTF? And as I changed the list to `new Set([...node.childNodes])`, the numbers completely changed, and my code seemed to get even faster???? I think that the JS engine managed to optimize/reduce the code in a way that there was pretty much no code/work left. Like, why should it do the work if the result is discarded anyway? – Thomas May 14 '17 at 16:00
  • @Thomas You mean [add `new Set()` for all test cases](https://jsfiddle.net/unarist/k0cu8wta/3/), right? That makes sense and it makes the difference smaller, but the `[...foo]` version is still slower than the `for..of`+`push()` version (17% slower -> 10% slower on my environment). – unarist May 14 '17 at 16:37
  • @unarist No, it's not about `new Set`, but that you need to do something with the `result` that is an observable side effect, such as populating a global variable. Otherwise the compiler might be clever enough to optimise the `result.push(…)` call away completely. – Bergi May 14 '17 at 20:41
  • The problem is that you're trying to benchmark something that takes pretty much no time to execute, so you run it in a loop to get at least some measurable result. But JS engines have started to get smart enough to detect (to some extent) dead code *(code that does nothing)* and eliminate it. – Thomas May 14 '17 at 22:27 (see the sketch after this comment thread)
  • The engine optimizes this particular benchmark. Your conclusions basically only apply to this particular benchmark, not to the function/operation in general. You may be able to see some basic trend, but even that only applies to the JS engine you've used and could change as soon as the next browser version. What I'm saying is that such microbenchmarks aren't worth the effort, because the results are pretty much meaningless. If you're curious about performance optimization, read up on what to avoid and which practices would prevent optimization. Everything else is out of your hands/not your concern. – Thomas May 14 '17 at 22:48
  • Regarding your surprise, an explanation can be found in the similar [Why is native javascript array forEach method significantly slower than the standard for loop?](http://stackoverflow.com/q/22155280/1048572) – Bergi May 15 '17 at 01:22
  • Or should I just close this as a duplicate of [Why most JavaScript native functions are slower than their naive implementations?](http://stackoverflow.com/q/21037888/1048572)? :-) – Bergi May 15 '17 at 01:22
  • Hmm, I've implemented `Array.from()` and `[...foo]` according to the spec. It's not as fast as my previous non-native approaches, but much faster than the native `Array.from()` and `[...foo]`... interesting. https://jsfiddle.net/unarist/k0cu8wta/5/ – unarist May 15 '17 at 02:59
  • In the latest Chrome, spread and `for..of` perform on par. I suppose that's the answer to your question: native implementations often suck but tend to improve with time. The `Array.from` implementation from `core-js` is much faster than the one from Chrome. *because I thought "generally, native one is fast"* - that's a common misconception; Lodash had been faster than its native counterparts for quite a long time. – Estus Flask Apr 26 '18 at 12:37
  • On my machine (Windows 10, Chrome 73), running your tests, `for...of` gets ~94k ops/sec while `[...set]` and `Array.from(set)` get ~250k ops/sec. – yqlim Dec 27 '18 at 06:00
