
I am developing a large JavaScript application and, unsurprisingly, it really struggles in IE11 (Chrome: 8 secs, Node.js: 8 secs, IE11: 35 secs).

So I did some profiling and found that this method is my main time sink. I have already made all the changes I could think of; are there any other performance improvements I can make?

const flatten = function(arr, result) {
  // Non-array input: just wrap it
  if (!Array.isArray(arr)) {
    return [arr];
  }

  if (!result) {
    result = [];
  }

  for (let i = 0, length = arr.length; i < length; i++) {
    const value = arr[i];
    if (Array.isArray(value)) {
      // Recurse, collecting into the same result array
      flatten(value, result);
    } else {
      result.push(value);
    }
  }

  return result;
};

The method gets called many times, with smallish arrays (up to 10 string items, no more than 2 levels deep).
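For reference, here is roughly how I am measuring it. The sample data below is made up to match the shapes described above, not taken from the real app:

```javascript
// Sample input matching the description: ~10 string items, at most 2 levels deep
const sample = ["a", ["b", "c"], "d", ["e", ["f", "g"]], "h"];

const flatten = function(arr, result) {
  if (!Array.isArray(arr)) {
    return [arr];
  }
  if (!result) {
    result = [];
  }
  for (let i = 0, length = arr.length; i < length; i++) {
    const value = arr[i];
    if (Array.isArray(value)) {
      flatten(value, result);
    } else {
      result.push(value);
    }
  }
  return result;
};

// Time many small calls, as in the real application
console.time("flatten x 100k");
for (let n = 0; n < 100000; n++) {
  flatten(sample);
}
console.timeEnd("flatten x 100k");
```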

Leo
  • Some JavaScript engines do `array.push(n)` faster than `array[array.length] = n`; in others it's the other way around. You might check which is true for IE and use whichever it finds faster. – T.J. Crowder Apr 18 '18 at 15:32
  • @T.J.Crowder - thank you for your comment. `push()` seems quicker in all of them (IE, Chrome and nodejs)... – Leo Apr 18 '18 at 15:40
  • Okay. (That Chrome and Node.js are the same isn't a surprise -- they use the same JavaScript engine, V8.) – T.J. Crowder Apr 18 '18 at 15:41
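Following up on that comment, here is roughly how the two approaches can be compared. This is only a sketch; absolute timings vary a lot by engine and run:

```javascript
const N = 1e6;

// Append via push()
console.time("push");
let a = [];
for (let i = 0; i < N; i++) a.push(i);
console.timeEnd("push");

// Append via index assignment
console.time("index assignment");
let b = [];
for (let i = 0; i < N; i++) b[b.length] = i;
console.timeEnd("index assignment");
```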

2 Answers

3

Doing the if (!result) and Array.isArray(value) checks repeatedly should be avoided. I'd go for

function flatten(arr, result = []) {
  if (Array.isArray(arr)) {
    for (var i = 0; i < arr.length; i++) {
      flatten(arr[i], result);
    }
  } else {
    result.push(arr);
  }
  return result;
}

for simplicity. If the engine doesn't optimise this enough by inlining and recognising the loop patterns, I'd also try splitting it into two functions:

function flatten(val) {
  if (Array.isArray(val)) // omit this check if you know that `flatten` is called with arrays only
    return flattenOnto(val, []);
  else
    return [val];
}
function flattenOnto(arr, result) {
  for (var i = 0, len = arr.length; i < len; i++) {
    var val = arr[i];
    if (Array.isArray(val))
      flattenOnto(val, result);
    else
      result.push(val);
  }
  return result;
}
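A quick sanity check of the two-function version (snippet repeated so it runs on its own):

```javascript
function flatten(val) {
  if (Array.isArray(val))
    return flattenOnto(val, []);
  else
    return [val];
}
function flattenOnto(arr, result) {
  for (var i = 0, len = arr.length; i < len; i++) {
    var val = arr[i];
    if (Array.isArray(val))
      flattenOnto(val, result);
    else
      result.push(val);
  }
  return result;
}

console.log(flatten(["a", ["b", ["c"]], "d"])); // → ["a", "b", "c", "d"]
console.log(flatten("x"));                      // → ["x"] (non-arrays get wrapped)
```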

I also used plain var instead of let, because it has been known to be faster in some engines; I don't know whether that is still the case.

If, as you say, you also know that your arrays have a limited depth, you might even want to inline the recursive calls and spell them out:

function flatten(arr) {
  if (!Array.isArray(arr)) return [arr]; // omit this check if you can
  var result = [];
  for (var i = 0, ilen = arr.length; i < ilen; i++) {
    var val = arr[i];
    if (Array.isArray(val)) {
      for (var j = 0, jlen = val.length; j < jlen; j++) {
        // nest further loops as deep as you need
        result.push(val[j]);
      }
    } else {
      result.push(val);
    }
  }
  return result;
}
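A quick check of that inlined approach (snippet repeated with the parameter named consistently, so it runs on its own). Note that anything nested deeper than the loops you spell out is pushed as-is:

```javascript
function flatten(arr) {
  if (!Array.isArray(arr)) return [arr];
  var result = [];
  for (var i = 0, ilen = arr.length; i < ilen; i++) {
    var val = arr[i];
    if (Array.isArray(val)) {
      // One inlined level; add deeper loops if your data needs them
      for (var j = 0, jlen = val.length; j < jlen; j++) {
        result.push(val[j]);
      }
    } else {
      result.push(val);
    }
  }
  return result;
}

console.log(flatten(["a", ["b", "c"], "d"])); // → ["a", "b", "c", "d"]
// Caveat: anything below the second level is pushed unflattened:
console.log(flatten([["x", ["y"]]]));         // → ["x", ["y"]]
```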
Bergi
  • If trying to eke out every last bit of performance, may be worth caching `Array.isArray` to a local. – T.J. Crowder Apr 18 '18 at 15:44
  • Thank you for the answer. The first snippet provided (with default parameter - `result = []`) is how I had it originally, however `browserify`\`babelify` changes it to use a loop (as IE11 does not support default parameters) and it really did kill my performance... I will try with 2 methods as you suggested... – Leo Apr 18 '18 at 15:45
  • @LeonardoSeccia If you are using a transpiler, you probably should write performance-sensitive code in plain ES5 to have exact control over the code. – Bergi Apr 18 '18 at 15:46
  • @T.J.Crowder Good idea, though I really would have expected browsers to have optimised that by now. – Bergi Apr 18 '18 at 15:53
  • @Bergi - using the double method did help... Thank you. – Leo Apr 18 '18 at 16:05
  • @Bergi: Yeah, [doesn't seem to make much difference](https://jsperf.com/caching-array-isarray) (though it does make a small difference on IE11, which is the problematic one for the OP, so...). – T.J. Crowder Apr 18 '18 at 16:07
  • Bergi + @T.J.Crowder - thank you. 23% quicker with your suggestions: https://jsperf.com/flattenvflatten – Leo Apr 18 '18 at 16:32
  • @LeonardoSeccia Have you also profiled it in your real application? Only testing with real values can show the benefits. – Bergi Apr 18 '18 at 16:53
  • @Bergi - yes, unfortunately in the application I am not seeing much of a performance increase... But it is a little bit better, nevertheless... – Leo Apr 18 '18 at 16:59
-1

The way you use recursion looks a bit odd to me: depending on the depth level, you're both returning a new array and mutating a parameter. You also have duplicated Array.isArray(array) calls. I think this code can be simplified quite a bit, for example to something like the following (no parameter mutation, as you can see):

const flatten = (array) => Array.isArray(array)
  ? array.reduce((accumulated, value) => accumulated.concat(flatten(value)), [])
  : [array];
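For example (repeating the function so the snippet runs on its own; it handles arbitrary nesting depth):

```javascript
const flatten = (array) => Array.isArray(array)
  ? array.reduce((accumulated, value) => accumulated.concat(flatten(value)), [])
  : [array];

console.log(flatten(["a", ["b", ["c", "d"]], "e"])); // → ["a", "b", "c", "d", "e"]
console.log(flatten("x"));                           // → ["x"]
```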

Not sure performance will improve that much, to be honest, but it looks more elegant in my opinion. jsPerf is your friend!

sp00m
  • Creating and throwing away lots of intermediate arrays with `concat` is not going to improve time performance. – T.J. Crowder Apr 18 '18 at 15:44
  • @sp00m - i gave you a +1 for trying to help, but jsPerf is your friend too... this is really killing performance... – Leo Apr 18 '18 at 15:49
  • @LeonardoSeccia Yeah, well, just wanted to share this tool with you in case you wanted to benchmark the solutions you received ;) – sp00m Apr 18 '18 at 15:52