9

Say I have a two dimensional array: vectors[x][y], and the initial array structure looks like this:

vectors = [    
 [0, 0, 0, 0, 0],
 [0, 0, 0, 0, 0],
 [0, 0, 0, 0, 0],
 [0, 0, 0, 0, 0],
 [0, 0, 0, 0, 0]
]

After some calculations, the data in the array is randomized. What is the fastest and most efficient way to return the array to its initial state?

I know that I could just hardcode the above zeroed array and set vectors equal to it again, but I also know that an algorithm such as:

for (var x = 0; x < vectors.length; x++) {
    for (var y = 0; y < vectors[x].length; y++) {
        vectors[x][y] = 0;
    }
}

runs in O(x * y) time.

So which is the better way? And is there a better, even faster/more efficient way to solve this?

And for the general case of zeroing a multi-dimensional array of any length, which is the best way? (I'm working in JavaScript if it matters)

chrisdotcode
  • possibly related: http://stackoverflow.com/questions/1295584/most-efficient-way-to-create-a-zero-filled-javascript-array – lc. Nov 07 '12 at 18:29
  • Have you tried both approaches and profiled each of them? – Dai Nov 07 '12 at 18:30
  • @lc. I'll always know the length of the array(s) beforehand, and they'll be initialized by hand. For the particular case of my question, the array is a 5x5. – chrisdotcode Nov 07 '12 at 18:31
  • Why are we worried about the Big-O performance for a 5x5 array? You could unroll the entire nested loop, I suppose. – A. Webb Nov 07 '12 at 18:35
  • @A.Webb: because of the pure theoretical interest! (which is why I also asked for the general case) – chrisdotcode Nov 07 '12 at 18:36
  • Do you need to create a new one, reset the arrays on the first level, or null all values in the existing arrays? Would be a huge functional difference for code that [still] references the current object – Bergi Nov 07 '12 at 19:57
  • @Bergi: Doesn't matter. Whichever can get from a 5x5 randomized dataset to the zeroed out one above the most efficiently. – chrisdotcode Nov 07 '12 at 20:01

4 Answers

3

Here are my two cents:

I'd keep a clean copy of your original array for the fastest performance. You can either keep a reference to a hard-coded copy

var vectorsOrig = [    
 [0, 0, 0, 0, 0],
 [0, 0, 0, 0, 0],
 [0, 0, 0, 0, 0],
 [0, 0, 0, 0, 0],
 [0, 0, 0, 0, 0]
];

or make a clean clone of the initial array dynamically using slice (applied recursively in your case, for a deep copy):

var clonedVectors = vectors.map(function (row) { return row.slice(0); }); // slice(0) copies each row; map rebuilds the outer array

Regardless, taking the approach of resetting your vector reference to an original copy will be faster than cycling through and resetting each node. If your old vector array object isn't referenced any more, JavaScript will garbage collect it.

With that said, the question becomes one of obtaining a clean copy each and every time. Having one hard-coded instance gives you a single clean copy, so you'll have to clone it thereafter. Nor do you want to fall back into dynamic generation via the same sort of for loops as the reset option. My advice is to write a clone function that simply returns a new hard-coded or initialized array:

function newVector() {
    return [    
     [0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0]
    ];
}
var vector = newVector();
vector[1][2] = 11;
console.dir(vector);
vector = newVector();  // your old array will be garbage-collected if no longer referenced by any other reference
console.dir(vector);
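For the general case the question also asks about (any dimensions, not just 5x5), the same idea can be sketched as a parameterized builder; the `zeroMatrix` name is illustrative, not from the original answer:

```javascript
// Hypothetical general-case builder: returns a fresh rows x cols matrix of zeroes.
function zeroMatrix(rows, cols) {
    var matrix = [];
    for (var x = 0; x < rows; x++) {
        var row = [];
        for (var y = 0; y < cols; y++) {
            row.push(0);
        }
        matrix.push(row);
    }
    return matrix;
}

// Same shape as the hard-coded newVector(), but works for any size.
var vectors = zeroMatrix(5, 5);
```

Building a new matrix is still O(rows * cols) work, but it keeps the "swap in a fresh copy" approach without hard-coding each size.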

Ideally, it's best to benchmark the various approaches.

EDIT Thanks to Vega's input, I've modified his test to compare three approaches. In Chrome and IE9, this solution seems to be the fastest; in FF (15.0.1), manual iteration seems faster (memory allocation/management is possibly slower in FF). http://jsperf.com/array-zero-test/2

cbayram
  • Unfortunately, slice seems to be slower than updating the nodes http://jsperf.com/array-zero-test – Selvakumar Arumugam Nov 07 '12 at 19:19
  • This looks like Mohammad Goudarzi's solution, though you explain your rationale a bit better. – 0x1F602 Nov 07 '12 at 19:22
  • Depends on the environment (browser). For Chrome 22.0.1229.96 32-bit on Windows Server 2008 R2 / 7 64-bit, it's faster. – cbayram Nov 07 '12 at 19:30
  • Is brute actually the fastest solution? I added `map()` to the perf, and suddenly brute is faster there for me, but newcopy is faster for me in test 2. Weird. @cbayram, could it be because when you hit "beautify" code in test 2, jsperf added an extra blank element to each array: `[0, 0, 0, 0, 0,]`? Check out http://jsperf.com/array-zero-test/3. **EDIT:** And now brute is performing better than newcopy in test 2 as well... Interesting... – chrisdotcode Nov 07 '12 at 20:12
  • @ChrisBlake, depends on the environment. I'll personally stick to the newCopy as the fastest in most implementations. `[0, 0, 0, 0, 0,]` should really be `[0, 0, 0, 0, 0]`, I missed that in the prev test. – cbayram Nov 07 '12 at 20:36
  • @MohammadGoudarzi: I also gave you an upvote to cancel out that nasty -1 ;-) – chrisdotcode Nov 07 '12 at 22:22
  • @ChrisBlake thank you. You are the man! but somebody else got the correct answer vote with about the same answer as mine. ;) – Rikki Nov 07 '12 at 22:26
1

So far, it sounds like we have two possible choices.

  1. Overwrite the whole thing with zeroes. (Your solution)

  2. Keep a record of all modified elements and reset only those. Each time an element in the array is set to a value, record that element's placement in the array (`record.push({x: 3, y: 5});` and so on). Then, with a single loop over the record, you can visit each modified element and set it back to 0. You still loop once through the record, but if your array is sparse, this is more efficient.
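A minimal sketch of approach #2 (the `setVector` and `record` names are illustrative, not from the original answer): every write is logged, and the reset walks only the logged cells.

```javascript
var vectors = [
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0]
];
var record = [];

// All writes go through this helper so dirtied cells are remembered.
function setVector(x, y, value) {
    record.push({ x: x, y: y });
    vectors[x][y] = value;
}

// Reset visits only the modified cells: O(n) in the number of writes.
function reset() {
    for (var i = 0; i < record.length; i++) {
        vectors[record[i].x][record[i].y] = 0;
    }
    record = [];
}

setVector(1, 2, 11);
setVector(4, 4, -3);
reset(); // only two cells are visited, not all 25
```

The trade-off is bookkeeping overhead on every write, so this only pays off when far fewer cells are touched than the matrix holds.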

Depending on the implementation, I can see why you would want #2 instead of #1...but if you seriously have a large enough matrix that you need to worry about analyzing the algorithm, you might consider doing some kind of server pre-processing.

0x1F602
  • I'm not particularly worried about who the data is going to be processed by. I was more intrigued as to which solution was the fastest in general. – chrisdotcode Nov 07 '12 at 18:42
  • I am willing to say that my second solution would perform in O(n) time where n is the number of modified elements. Worst-case performance being n = the entire array. – 0x1F602 Nov 07 '12 at 18:43
  • This looks like a particularly good solution, but I'm not quite sure I fully understand method #2. Could you reexplain it? – chrisdotcode Nov 07 '12 at 19:05
  • Thanks for the re-explanation. But how expensive is property access? `for (var i = 0; i < records.length; i++) { vectors[records[i].x][records[i].y] = 0; }` – chrisdotcode Nov 07 '12 at 19:52
  • In my algorithms course, you would say that memory access is 1 unit of time. So, to evaluate the code you just posted, I'd say var i = 0 is an assignment, 1 unit. i < records.length, a comparison 1 unit, i++, an addition 1 unit * records.length. records[i].x is one unit, records[i].y is one unit, and vectors[what][ever] is one unit in my mind. – 0x1F602 Nov 07 '12 at 19:57
0

Another different way of looking at the problem is to use a linear array and calculate the linear index from the x, y indices when you do your updates.

The initialization is then just a single for loop with time O(x+y)
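A sketch of what this flat layout might look like, assuming row-major indexing (the `index` helper name is illustrative):

```javascript
// A 5x5 grid stored as one 25-element linear array.
var ROWS = 5;
var COLS = 5;
var flat = new Array(ROWS * COLS);

// Map 2-D coordinates to a row-major linear index.
function index(x, y) {
    return x * COLS + y;
}

// Resetting is a single loop over all elements.
for (var i = 0; i < flat.length; i++) {
    flat[i] = 0;
}

flat[index(1, 2)] = 11; // equivalent of vectors[1][2] = 11
```

Note the single loop still visits ROWS * COLS elements, so the total work is the same as the nested loop; the win is simpler iteration and a flatter memory layout.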

HBP
  • Isn't this still O(x*y) because you still walk the entire array, even if it is represented as a single-dimensional array? – 0x1F602 Nov 07 '12 at 18:57
-1

I'll take a risk and say that the fastest way of assigning the same value to all elements is by calling `Array.map()`.

But there is a catch here. Note that this will be very fast on browsers that have implemented that method natively, and will have just the usual performance in other browsers. Also note that `.map()` isn't available in some old browsers, so you'll need to use Underscore.js or any other library that provides that method.
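As a sketch of what a `map()`-based reset might look like (note that this allocates new row arrays rather than zeroing the existing ones in place):

```javascript
var vectors = [
    [3, 1, 4],
    [1, 5, 9]
];

// Build a same-shaped zero matrix by mapping every row, and every element
// within each row, to 0. The old arrays become garbage-collectable.
vectors = vectors.map(function (row) {
    return row.map(function () {
        return 0;
    });
});
```

Because this rebinds `vectors` to fresh arrays, any other code holding a reference to the old rows will not see the reset.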

alexandernst