[UPDATE: The discussion of "slow mode" below also pointed me to "Why the convertToFastObject function make it fast?", which has a nice discussion of the V8 internals that @benjamin-gruenbaum referenced.]
I was wondering about the read performance of JavaScript objects created in different ways, so I wrote a jsPerf test suite, and the results are non-intuitive.
Firstly, the question "Performance of key lookup in JavaScript object" discusses the internals of V8's object creation, but I don't see why the method of creation would change the read performance after the object has been created.
Overview
I want to test read access on various data structures. I chose 3 structures (sketched below):
- An object literal, created and assigned in one statement
- An object literal, created in one statement, then values assigned in subsequent statements
- An array literal, created in one statement, but populated as an associative array, which presumably gets translated to an object in the background.
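Concretely, the three variants look something like this (a sketch with placeholder keys; the real test uses dictionary words):

```js
// 1. Object literal, created and assigned in one statement
var objectOneStatement = {
  "apple": true,
  "banana": true,
  "cherry": true
};

// 2. Object literal created empty, then populated key by key
var objectPiecemeal = {};
objectPiecemeal["apple"] = true;
objectPiecemeal["banana"] = true;
objectPiecemeal["cherry"] = true;

// 3. Array literal used as an associative array (string keys)
var arrayAsAssociative = [];
arrayAsAssociative["apple"] = true;
arrayAsAssociative["banana"] = true;
arrayAsAssociative["cherry"] = true;
```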
Test
http://jsperf.com/object-read-varies-by-creation-method
Methodology
In the setup method, I create the 3 data structures and populate them with random words from /usr/share/dict/words, inserted in randomized order (but the same order on each test run). Each test case then retrieves the same 10 random words from /usr/share/dict/words, not necessarily words that are in the structure, so misses are simulated as well as hits.
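Roughly, each test case boils down to the following sketch; the word arrays here are placeholders I'm inventing for illustration, the real randomized lists are in the jsPerf setup:

```js
// In the real suite these arrays hold randomized words from
// /usr/share/dict/words; a few literals stand in for them here.
var insertionWords = ["apple", "banana", "cherry", "quince"];
var lookupWords = ["banana", "quince", "zebra"]; // "zebra" is a deliberate miss

// Setup: build one of the structures (the populate-after-creation object shown here).
var data = {};
for (var i = 0; i < insertionWords.length; i++) {
  data[insertionWords[i]] = true;
}

// Timed test body: read the same words on every run, counting hits.
var hits = 0;
for (var j = 0; j < lookupWords.length; j++) {
  if (data[lookupWords[j]] !== undefined) {
    hits++;
  }
}
```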
Results
What I see on most browsers is pretty intuitive: the two objects perform better than the pseudo-associative array. My assumption is that the array literal is being proxied by an object in those cases, which increases the overall access time.
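As a side note, string keys on an array never become elements at all, which seems consistent with the array behaving like a plain object for these lookups:

```js
var arr = [];
arr["apple"] = true;

console.log(arr.length);       // 0: the string key is not an array element
console.log(Object.keys(arr)); // ["apple"]
console.log(arr["apple"]);     // true: the read works like a plain property access
```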
What I don't get is the Chrome 33 case, where creating and assigning an object as a single statement performs dramatically better than populating the structure after creation.
If the test included setup time, I could understand that difference, but as it stands, I don't fully get why the two object literals don't perform about the same.
The best guess I've come up with so far is that Chrome's object-creation algorithm can optimize its hash tree better when it is given all of the keys at creation time than when the keys are inserted one at a time after the fact.
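If anyone wants to poke at this directly, I believe V8 exposes debugging intrinsics behind a flag; for instance, running Node or d8 with `--allow-natives-syntax` should let you check whether an object still has "fast" properties (this assumes a V8 build where `%HasFastProperties` is available):

```js
// Run with: node --allow-natives-syntax fast-props.js
var oneShot = { a: 1, b: 2, c: 3 };

var piecemeal = {};
piecemeal.a = 1;
piecemeal.b = 2;
piecemeal.c = 3;

// true means V8 is using a hidden-class ("fast") representation;
// false means the object has fallen back to dictionary ("slow") mode.
console.log(%HasFastProperties(oneShot));
console.log(%HasFastProperties(piecemeal));
```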
Can anyone more familiar with these structures account for the difference?