One can expect code similar to this

for (let i = 0; i !== N; ++i) break

to be of O(1) complexity.
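A quick way to check that claim (a sketch; `performance` is global in browsers and in Node.js 16+):

```javascript
// Time a plain indexed loop that breaks immediately.
// The bound N should not matter: the loop body runs exactly once.
const time_plain_loop = N => {
    const start = performance.now()
    for (let i = 0; i !== N; ++i) break
    return performance.now() - start
}

console.log(time_plain_loop(1_000), "ms")      // small N
console.log(time_plain_loop(1_000_000), "ms")  // big N: expected to be roughly the same
```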
But in the case of for(of) or for(in) one can never be sure, because it strongly depends on the engine's specifics.

For example, the following snippet on my Chrome 87, FF 83 and Node.js 14 takes around 0 ms (0.01 ms) for the small object and around 150 ms for the big one.
const create_obj = n => {
    return Object.fromEntries((function* () {
        for (let i = 0; i !== n; ++i) yield [i, i]
    })())
}

const check_forin_performance = obj => {
    const start_time = performance.now()
    for (const key in obj) {
        console.log("first iteration after", performance.now() - start_time, "ms")
        break
    }
}

const small_obj = create_obj(1_000)
const big_obj = create_obj(1_000_000)
console.log("small obj has", Object.keys(small_obj).length, "fields")
check_forin_performance(small_obj)
console.log("big obj has", Object.keys(big_obj).length, "fields")
check_forin_performance(big_obj)
Which leads me to believe that it's of O(N) complexity, despite the fact that there is a break and only one iteration is performed.
In FF, for (const x of Object.keys(obj)) gives much better performance, but it still looks linear:
// cost of materializing the key list alone
let obj = Object.fromEntries((function* () { for (let i = 0; i !== 1_000_000; ++i) yield [i, i] })())
let start_time = performance.now()
let x = Object.keys(obj).length
console.log(performance.now() - start_time)

// for(of) over Object.keys()
obj = Object.fromEntries((function* () { for (let i = 0; i !== 1_000_000; ++i) yield [i, i] })())
start_time = performance.now()
for (const x of Object.keys(obj)) { console.log("first iteration after", performance.now() - start_time); break }

// for(in)
obj = Object.fromEntries((function* () { for (let i = 0; i !== 1_000_000; ++i) yield [i, i] })())
start_time = performance.now()
for (const x in obj) { console.log("first iteration after", performance.now() - start_time); break }
It's not totally clear, but this is probably due to this section of the spec, and specifically due to these lines:
i. Let `keys` be ? `object`.[[OwnPropertyKeys]]().
ii. For each `key` of `keys` in List order, do
    1. If `Type(key)` is String, then
        a. Append `key` to `remaining`.
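My reading of those steps, as a simplified JavaScript model (this is just an illustration of why the work before the first iteration is O(N), not engine code):

```javascript
// Simplified model of the spec's key enumeration: all own string keys are
// collected up front, before the first key ever reaches the loop body.
function* enumerate_object_properties(obj) {
    const remaining = []
    for (const key of Reflect.ownKeys(obj)) {           // object.[[OwnPropertyKeys]]()
        if (typeof key === "string") remaining.push(key) // "Append key to remaining"
    }
    // only now do keys start flowing to the for(in) body
    yield* remaining
    // (the real algorithm also walks the prototype chain and checks
    // enumerability; both are omitted here)
}

console.log([...enumerate_object_properties({ a: 1, b: 2 })]) // ["a", "b"]
```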
So, since some lists are apparently created before the first iteration, O(N) is a good guess. But this is obviously sub-optimal: one can imagine an algorithm that enumerates an object's keys with the help of an iterator, so that the first iteration starts after an O(1) amount of time. But this is the way it is now.
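For comparison, the built-in Map iterator seems to behave the way I'd want: as far as I can tell, its first iteration does not depend on the map's size. This is only a sketch (and not a general workaround, since it only helps for data structures I create myself):

```javascript
const create_map = n => {
    const m = new Map()
    for (let i = 0; i !== n; ++i) m.set(i, i)
    return m
}

const check_map_performance = map => {
    const start = performance.now()
    for (const [key, value] of map) {
        console.log("first iteration after", performance.now() - start, "ms")
        break
    }
}

check_map_performance(create_map(1_000))
check_map_performance(create_map(1_000_000)) // expected to start just as quickly
```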
And I'm wondering: is there anything that can be done, so that if I iterate over n keys (where n is much smaller than N) I get O(n) complexity and not O(N)? Maybe there is some clever trick I'm not aware of. But it should work with any object, not only those I create myself.
And a reference to the place(s) in V8 sources where this all happens will also be appreciated.