As far as the language definition is concerned, JavaScript numbers are 64-bit IEEE 754 floating point (doubles).
(Except for bitwise operations, which use 32-bit integers. I suppose that 32-bit behaviour is mandated even on a 64-bit CPU, for backwards compatibility: 1 << 33
has to be 2 even if the CPU could do better.)
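To make the 32-bit behaviour concrete, here is a small sketch (plain console code, nothing engine-specific assumed):

// Bitwise operands are coerced to signed 32-bit integers, and shift counts are taken mod 32
console.log(1 << 33)              // 2, same as 1 << 1
console.log(1 << 32)              // 1, same as 1 << 0
console.log(Math.pow(2, 31) | 0)  // -2147483648: the value wraps into the signed 32-bit range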
However, if a compiler can prove a number is used only as an integer, it may prefer to implement it as such for efficiency, e.g.
for (var i = 0; i < Math.pow(2, 40); i++)
    console.log(i)
Clearly it is desirable to implement this with integers, in which case 64-bit integers must be used for correctness, since the loop bound of 2^40 does not fit in 32 bits.
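A quick sanity check that 32 bits would not suffice here; the | 0 coercion below is only there to show what the bound looks like when squeezed into 32 bits:

console.log(Math.pow(2, 40) | 0)  // 0: 2^40 is a multiple of 2^32, so the low 32 bits are all zero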
Now consider this case:
for (var i = 0; i < Math.pow(2, 60); i++)
    console.log(i)
If implemented with floating-point numbers, the above will fail: floating point cannot exactly represent integers above 2^53, so once i reaches 2^53, i++ no longer changes its value and the loop never gets any further.
If implemented with 64-bit integers, it works fine (well, apart from the inconveniently long run time).
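The failure mode is easy to demonstrate directly; with doubles I would expect the following output:

var big = Math.pow(2, 53)
console.log(big + 1 === big)  // true: incrementing has no effect, which is exactly where the loop counter would stall
console.log(big - 1 + 2)      // 9007199254740992: the result rounds back down to 2^53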
Is a JavaScript compiler allowed (both by the letter of the standard and by compatibility with actual existing code) to use 64-bit integers in cases like these, where they produce different but better results than floating point?
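To pin down what "different results" means, here is a hypothetical bit of code of the kind that could observe the representation; an engine that silently switched to 64-bit integers would flip the branch:

var x = Math.pow(2, 60)
if (x + 1 === x)
    console.log("doubles")        // what the standard's rounding rules give
else
    console.log("something wider than a double leaked through")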
Similarly, if a JavaScript compiler provides arrays with more than 2^32 (roughly four billion) elements, is it allowed to implement array lengths and indexes as 64-bit integers?
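For reference, the current 32-bit length limit is easy to observe; I assume the RangeError below is what the standard requires for an invalid length:

var ok = new Array(Math.pow(2, 32) - 1)
console.log(ok.length)                   // 4294967295, the largest length currently allowed
try {
    var tooBig = new Array(Math.pow(2, 32))
} catch (e) {
    console.log(e.name)                  // "RangeError"
}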