
I'd like to elaborate on this question a bit. The question was whether the access time of an object's properties is constant, i.e. independent of the number of attributes. The answer was that it indeed is constant.

I've made a quick test on jsFiddle which, however (at least for Chrome), showed that there is quite a significant difference (~three-fold) between the access time of an object with 10 attributes and an object with 100k attributes.

So my first question is: is the access time really constant, and is the measured difference caused by imprecision of the measurement due to memory management or something related to initialization, rather than the access itself?

Then thg435 created a benchmark in which it is easy to see the differences between access times for big and small objects in various browsers. For example, IE and Safari show no difference between big and small objects (both are basically very slow), while Firefox, and even more so Opera and Chrome, show a huge difference in access times. It also shows incredible differences between the speeds of the browsers (e.g. IE is ~50-100 times slower than Firefox or Chrome).

My second question is then: can these results be trusted? Or does some optimization (which wouldn't make this big a difference in real JS code) kick in and degrade the results significantly? How does one write a fair benchmark of variable access? I've made a second benchmark to avoid such optimizations by not only accessing the variable but also using its value (at the cost of also measuring numeric operations), which reduces the otherwise insane speed of Chrome and Opera, but IE is still about 100x slower. I know that IE is slow, but being this much slower seems suspicious to me.

Update: I've added a third benchmark which aims to address the complexity question better (a sketch of the idea follows the testing code below).

Footnote: it doesn't allow me to post the question with the link to jsFiddle without the code, so here is the testing code.

function go(amount) {
  // Build an object with `amount` properties.
  var object1 = {};
  for (var i = 0; i < amount; i++) {
    object1['id' + i] = i;
  }

  // Time 100 million lookups of a single property; `j += ...` uses the
  // value so the engine cannot skip the access entirely.
  var start = new Date().getTime();
  var j = 0;
  for (var i = 0; i < 100000000; i++) {
    j += object1['id3'];
  }
  var end = new Date().getTime();
  console.log(j);
  document.getElementById('result').innerHTML = end - start;
}
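The third benchmark itself is only linked above; as a sketch of the idea (the object sizes and iteration count here are my own arbitrary choices, not the original fiddle's), one can time the same number of lookups against objects of growing size. Roughly flat times would suggest O(1) access, while times growing with the size would suggest a size-dependent lookup.

function timeLookups(amount) {
  // Build an object with `amount` properties.
  var obj = {};
  for (var i = 0; i < amount; i++) {
    obj['id' + i] = i;
  }
  // Time a fixed number of lookups, using the value so the
  // access cannot be optimized away entirely.
  var start = new Date().getTime();
  var j = 0;
  for (var k = 0; k < 10000000; k++) {
    j += obj['id3'];
  }
  var end = new Date().getTime();
  console.log(amount + ' props: ' + (end - start) + ' ms (j=' + j + ')');
}

// Compare timings across object sizes.
[10, 100, 1000, 10000, 100000].forEach(function (n) { timeLookups(n); });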
  • I'd say that `j +=` dirties your results a little - but even if we remove it, the access times for small vs. big objects differ noticeably. – eithed Jan 28 '14 at 10:28
  • I agree. I just wanted to be sure that something like "the accessed variable is never used, so the browser will not even access it" is not happening. – Erlik Jan 28 '14 at 10:31
  • For non-positional/sequential keys - such as strings - it instead uses the object hash to look up the index-key in another, internal table, something like: `[hash($obj) => 234]` – Christoffer Bubach Mar 29 '20 at 16:05

1 Answer


Essentially, objects in JavaScript can be thought of as hash maps (a list/array of keys paired with values).
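To make that concrete, a plain object already offers this map interface directly (variable names here are my own illustration):

// A plain object used as a key -> value map.
var ages = {};
ages['alice'] = 30;           // insert a key/value pair
console.log(ages['alice']);   // look up by key -> 30
delete ages['alice'];         // remove the pair
console.log('alice' in ages); // false: the key is gone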

Several different implementations of such a map exist, and the standard places no restrictions on how the underlying implementation should work. You are even allowed to implement it as a simple array and search for the key at each index.

Imagine someone actually used an array to implement the map as sketched above: insertion can be done pretty fast - simply push the pair onto the end of the array. Deletion would also be fairly fast (assuming you have the index of the pair at hand). But finding the right index will most likely be linear in the number of items (pairs) in the array, and finding the index is a prerequisite for every object value lookup in this model.
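As an illustration (this is my own sketch of the idea, not how any real engine works), such an array-backed map could look like this:

// A naive map backed by an array of [key, value] pairs.
function NaiveMap() {
  this.pairs = [];
}
// O(n): scan every pair until the key matches.
NaiveMap.prototype.indexOf = function (key) {
  for (var i = 0; i < this.pairs.length; i++) {
    if (this.pairs[i][0] === key) return i;
  }
  return -1;
};
// Inserting a new key is O(1): push onto the end.
NaiveMap.prototype.set = function (key, value) {
  var i = this.indexOf(key);
  if (i === -1) this.pairs.push([key, value]);
  else this.pairs[i][1] = value;
};
// Every lookup pays the O(n) index search first.
NaiveMap.prototype.get = function (key) {
  var i = this.indexOf(key);
  return i === -1 ? undefined : this.pairs[i][1];
};
// Deletion is fast once the index is known.
NaiveMap.prototype.remove = function (key) {
  var i = this.indexOf(key);
  if (i !== -1) this.pairs.splice(i, 1);
};

With this layout, `get` pays the linear scan on every call, which is exactly the lookup cost discussed below.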

JS engines would likely use some kind of search tree structure to optimize the lookup time - but the price is that insertion, deletion, or both will suffer.

A sane assumption is that lookups are performed much more often than insertions. Therefore a good search tree (most likely AVL-based - http://en.wikipedia.org/wiki/AVL_tree) is implemented.

Different search trees have different properties regarding the speed of insertion, deletion, and lookup. Another property is whether insertion order is preserved. In the "silly array implementation" sketched above, insertion order is preserved, while in many search trees it is not. Since the ECMA standard does not specify key ordering, you cannot rely on the order of the keys (some tree implementations may even reorder when keys are inserted, deleted, or even looked up!). However, some browsers do appear to preserve insertion order. Just do not trust this to remain the case in future versions or on other browser platforms.
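For example, the following shows why relying on key order is risky. The commented output is what a modern engine typically prints (integer-like keys sorted numerically before string keys), but per the caveat above you should not depend on any particular ordering:

var o = {};
o['b'] = 1;
o['2'] = 2;
o['a'] = 3;
o['1'] = 4;
// Insertion order was b, 2, a, 1, but a modern engine typically prints
// ["1", "2", "b", "a"]: integer-like keys first, in numeric order.
console.log(Object.keys(o));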

In other words: you cannot assume O(1) running time for lookup, insertion, or deletion. Most likely you will see something along the lines of O(log N), but it may vary between JS engines.

  • Good point, I've updated the question. I added a third benchmark, which could show the O(1)/O(log N) distinction. – Erlik Feb 01 '14 at 10:39