I have checked every question and article about this that I could find, but nothing really answers it. I make it sound as if I have just one question, but there are actually several.
Maybe I should first explain what I am trying to do. I am building a File Manager entirely in JavaScript. The File Manager receives files from the backend (PHP, Twig) and stores them in an array of Folders; each Folder holds its own Files, so Folders is an array and the Files inside each Folder are an array as well. These Files are also shown on the page, and the user can select, copy, cut and paste them, among other operations (I am still writing these operations, because this is where the problem lies).
Files on the page have a data-id assigned to them, so that I can easily match them between the File Manager and the backend. Since I know the ID of each and every file, I think I could completely eliminate traversing the arrays of Files in search of a particular File, if only I could create an array that uses the ID of a File as its index. Because this is JavaScript, I can do exactly that! But there are some problems with it.
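To make the idea concrete, here is a rough sketch of what I mean (the names filesById, addFile and getFileById are made up for illustration, this is not my actual code):

```js
// Sketch: use the backend ID directly as the array index,
// instead of scanning the Folders/Files arrays for a match.
var filesById = [];

function addFile(file) {
    filesById[file.id] = file; // file.id is the data-id coming from the backend
}

function getFileById(id) {
    return filesById[id]; // direct lookup, no traversal
}

// The traversal I am trying to eliminate looks roughly like this:
function findFileById(folders, id) {
    for (var i = 0; i < folders.length; i++) {
        var files = folders[i].files;
        for (var j = 0; j < files.length; j++) {
            if (files[j].id === id) {
                return files[j];
            }
        }
    }
    return null;
}
```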
Trying to use an Object for this task just does not work, because it is simply way slower than an Array. (Now, I understand that the differences in speed, even if measured in millions of operations per second, are not that big, but why not dig for the ultimate performance, right?)
So here is a list of questions that I cannot seem to find a reliable answer to:
- Why is an Object so much faster on Chrome than on Firefox?
- Why is accessing an undefined index so much slower on Chrome?
- Why is an Array so much faster on Firefox than on Chrome?
- I know why Chrome loses performance at higher indexes (at around 100k it switches the array to a dictionary/hash-table representation, I think; it was answered with a link to the source in another question on SO), but Firefox loses performance gradually, as if it traversed to the higher indexes sequentially even when they are accessed directly. Why is that?
- Why is an Array extremely slow when it contains only one element, but that element sits at a very high index?
There are probably some more questions I have, but I cannot think of them right now.
The fastest configuration I found that supports IDs as indexes with holes between them, and can also start at a high index, is to use one Array that holds the data you want to store and a second Array that holds only the indexes, so that when you are searching for the object with ID 10 you touch the index Array instead of the Array with the data.
You can see an example of this at http://jsperf.com/array-management-performance/2.
EDIT: If you want to see the performance degradation, please "review" that jsperf and change minId and maxId to some big numbers.
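A rough sketch of that two-array configuration, assuming integer IDs (the put/get names are only for illustration):

```js
// `data` stays small and densely packed; `index` maps a possibly huge,
// holey ID directly to a position inside `data`.
var data = [];   // densely packed objects
var index = [];  // index[id] -> position in `data`

function put(id, value) {
    index[id] = data.length;
    data.push(value);
}

function get(id) {
    var pos = index[id];
    return pos === undefined ? undefined : data[pos];
}

// put(100000, { name: "file.txt" });
// get(100000).name === "file.txt"
```

This way the big, sparse indexes only ever touch the index Array, while the data Array stays compact.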
Here are some stats:
Object
Read Defined: Firefox 165.173 mln ops/sec | Chrome 351.699 mln ops/sec
Read Undefined: Firefox 98.582 mln ops/sec | Chrome 54.666 mln ops/sec
Write Close: Firefox 7.599 mln ops/sec | Chrome 291.244 mln ops/sec
Write Far: Firefox 5.599 mln ops/sec | Chrome 93.733 mln ops/sec
Write Overwrite: Firefox 7.599 mln ops/sec | Chrome 291.244 mln ops/sec
Array
Read Defined: Firefox 681.206 mln ops/sec | Chrome 401.522 mln ops/sec
Read Undefined: Firefox 681.206 mln ops/sec | Chrome 62.827 mln ops/sec
Write Close: Firefox 400.234 mln ops/sec | Chrome 121.519 mln ops/sec
Write Far: Firefox 348.560 mln ops/sec | Chrome 121.519 mln ops/sec
Write Overwrite: Firefox 400.234 mln ops/sec | Chrome 234.337 mln ops/sec
P.S. Did you know that Read Defined is faster on Chrome on Mobile than on Chrome on Desktop?
I jsperf-ed every single question I have here, so either my/others' tests were incorrectly written, or this is some really funky stuff.
http://jsperf.com/array-management-performance/2
http://jsperf.com/array-in-the-middle-of-nowhere
http://jsperf.com/object-holey-performance-better
http://jsperf.com/object-performance-better
http://jsperf.com/array-vs-object-mine-v2/5
P.S.2 I know that one of the Array tests is actually testing an Object, and one of the Object tests is actually testing an Array; please do not point that out, I am aware of it. The mistake slipped in because jsperf is rather poorly made and I was trying to check a lot of different setups relatively quickly.
P.S.3 I am sorry that some of these tests are really messy; I did not think I would actually need to show them to anyone, but I still think they are sufficient.
---------------------------------------------------------------------------
EDIT: (Answer, hopefully posted here to get the question re-opened so that I can answer it properly)
Here I go with an answer to all of our (my) problems. Now that I read my questions again with the answer in hand, I can see how they could hardly be answered at all without digging into the sources of each browser.
I myself still do not know the exact answers to questions 1 through 5, but the correct answer is "because of implementation differences", it is as simple as that. Obviously, I have not come back here just to write that: I have a solution to the problem I am describing in this question.
Simplified Problem: you have a set of IDs assigned to data, most likely coming from the DB. You want that data in an array in JS so that you can work with it very easily, but you DO NOT WANT to lose performance while putting 100k elements into a JS array. How does one put N elements into an array in JS without losing read/write performance? (I did not test writes much, because they are not that important to me.)
Solution and Explanation: while studying this problem I checked a multitude of potential solutions, such as splines, Fourier transforms, hash tables, binary search, and a twist on maps/hash tables that actually performed VERY well, but not well enough for my taste (it would also degrade as the size increased).
The final solution seems dead simple and obvious, but somehow (I guess I'm dumb) it took me 10 days to figure it out. It scales perfectly, and access to any element is extremely fast: O(1).
To write this solution I rely on a property of JS engines shown in http://jsperf.com/array-of-arrays-with-high-indexes/2 : it would seem that access to the first ~1000 elements IS SUPER FAST (even in arrays of arrays), and because I know my data is keyed by unsigned integers, I can easily map the integers from 1D to 3D, so that the FastArray can hold up to a billion elements.
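Here is a minimal sketch of that 1D-to-3D mapping, assuming unsigned integer IDs below one billion (this is not the exact code from the GitHub repo linked below, just the idea it is built on):

```js
// Split the ID into three coordinates, each below 1000, so every nested
// array only ever uses its "fast" low indexes; 1000^3 = 1e9 possible IDs.
function FastArray() {
    this.data = [];
}

FastArray.prototype.put = function (id, value) {
    var i = Math.floor(id / 1000000) % 1000;
    var j = Math.floor(id / 1000) % 1000;
    var k = id % 1000;
    var level1 = this.data[i] || (this.data[i] = []);
    var level2 = level1[j] || (level1[j] = []);
    level2[k] = value;
};

FastArray.prototype.get = function (id) {
    var level1 = this.data[Math.floor(id / 1000000) % 1000];
    if (!level1) return undefined;
    var level2 = level1[Math.floor(id / 1000) % 1000];
    if (!level2) return undefined;
    return level2[id % 1000];
};

// var files = new FastArray();
// files.put(123456789, "some file");
// files.get(123456789) === "some file"
```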
You can see this solution in action right here: http://jsperf.com/map-check-v2/5 -- the first test is slow because I read the sample data from a plain array, and that is very slow... EDIT: That does not actually seem to be the case; I really do not know what is causing it. I think jsperf is freaking out somehow, because running just that test on its own yields pretty good results, which is very weird.
A cleaner implementation is here: https://github.com/ImJustAskingDude/JavascriptFastArray , with references to every page I read while trying to figure this out.
P.S.4 Moderators, please re-open this question so that people searching for ultimate Array performance can see this answer as a proper, actual answer. Thanks.
P.S.5 I do not know why, but I guess there is a bug in my final code that makes Chrome perform relatively poorly; it is probably easy to fix.