16

Enumerating the keys of a JavaScript object replays the keys in insertion order:

> for (key in {'z':1,'a':1,'b':1}) { console.log(key); }
z
a
b

This is not part of the standard, but is widely implemented (as discussed here):

ECMA-262 does not specify enumeration order. The de facto standard is to match insertion order, which V8 also does, but with one exception:

V8 gives no guarantees on the enumeration order for array indices (i.e., a property name that can be parsed as a 32-bit unsigned integer).
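The array-index exception is easy to observe (a sketch; the output shown is what modern engines such as V8 produce):

```javascript
const obj = {};
obj['b'] = 1;  // plain string key
obj['2'] = 1;  // array-index key
obj['a'] = 1;  // plain string key
obj['0'] = 1;  // array-index key

// Array-index keys enumerate first, in ascending numeric order;
// the remaining string keys follow in insertion order.
console.log(Object.keys(obj)); // → [ '0', '2', 'b', 'a' ]
```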

Is it acceptable practice to rely on this behavior when constructing Node.js libraries?

Aneil Mallavarapu
  • I typically try to avoid relying on any particular behavior when it comes to JS. – zzzzBov Feb 07 '12 at 16:13
  • No, it is not. What's your use case? – georg Feb 07 '12 at 16:24
    I've seen this style in some node libraries, and wanted to check with the community before I bugged the developers. The (brand new) node.js Dynamo library, for example, takes a schema argument of two keys where the order is important. The first specifies the "hash", and the second specifies the "range". – Aneil Mallavarapu Feb 07 '12 at 16:28
  • Here https://github.com/jed/dynamo/blob/master/lib/Table.js#L30? Honestly, looks a bit lazy to me. They pass an object and treat it like an array... no idea why. – georg Feb 07 '12 at 17:01
  • Even MongoDB is guilty of this... http://www.mongodb.org/display/DOCS/Geospatial+Indexing#GeospatialIndexing-NewSphericalModel "the use of order-preserving dictionaries is required for consistent results" – btown May 01 '12 at 00:56
  • @btown - thanks for mentioning that. I was so confused by the MongoDB docs (i.e. for sort()), since I assumed that dict order was undefined. Seems a bit messy to rely on. – rocketmonkeys Mar 17 '15 at 20:33

4 Answers

13

Absolutely not! It's not a matter of style so much as a matter of correctness.

If you depend on this "de facto" standard, your code might fail on a compliant ECMA-262 5th Ed. interpreter, because that spec does not specify the enumeration order. Moreover, the V8 engine might change its behavior in the future, say in the interest of performance.

maerics
7

Definitely do not rely on the order of the keys. If the standard doesn't specify an order, then implementations are free to do as they please. Hash tables often underlie objects like these, and you have no way of knowing when one might be used. Javascript has many implementations, and they are all competing to be the fastest. Key order will vary between implementations, if not now, then in the future.
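If a deterministic order is needed, one option is to make it explicit rather than trusting enumeration order; a minimal sketch:

```javascript
const translations = { zebra: 'Zebra', apple: 'Apfel', mango: 'Mango' };

// Sort the keys yourself instead of relying on enumeration order;
// the result is deterministic on every compliant engine.
const orderedKeys = Object.keys(translations).sort();
console.log(orderedKeys); // → [ 'apple', 'mango', 'zebra' ]
```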

Ned Batchelder
  • "*if not now, then in the future.*" - actually that's unlikely, as engines are concerned with backwards-compatibility. Everything that's used in a web browser will refrain from breaking the web, and there are already enough people who mistakenly rely on property order. Implementations will converge rather than get more diverse, unless there is some freaking performance gain. – Bergi Dec 15 '15 at 06:54
1

No. Rely on the ECMAScript standard, or you'll have to argue with the developers about whether a "de facto standard" exists like the people on that bug.

Matthew Flaschen
0

It's not advised to rely on it naively.

You should also do your best to stick to the spec/standard.

However, there are often cases where the spec or standard limits what you can do. In programming I'm not sure I've encountered many implementations that don't deviate from or extend their specification, often because the specification doesn't cater to everything.

Sometimes people relying on implementation specifics will have test cases for them, though it's hard to write a reliable test for keys being in order: such a test may succeed by accident, because out-of-order enumeration is difficult behavior to reproduce reliably.

If you do rely on an implementation specific, then you must document that. If your project requires portability (code that runs on other people's setups, out of your control, where you want maximum compatibility), then it's not a good idea to rely on an implementation specific such as key order.

Where you do have full control of the implementation being used, it's entirely up to you which implementation specifics you use, keeping in mind that you may still be forced to cater to portability by the common need or desire to upgrade implementations.

The best form of documentation for cases like this is inline, in the code itself, often with the intention of at least making it easy to identify areas to be changed should you switch from an implementation guaranteeing order to one not doing so.

You can make up the format you like but it can be something like...

/** @portability: insertion_ordered_keys */
for (let key in object) console.log(key);

You might even wrap such cases up in code:

forEachKeyInOrderOfInsertion(object, console.log)

Again, likely something less verbose, but enough to identify the cases that depend on it.
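A minimal sketch of such a wrapper (the function name is the hypothetical one from above; on engines that preserve insertion order for non-integer string keys it reduces to a plain loop):

```javascript
// Hypothetical wrapper: concentrates the order-dependent iteration
// in one place, so there is a single spot to change if you move to
// an engine that does not preserve insertion order.
function forEachKeyInOrderOfInsertion(object, callback) {
  for (const key of Object.keys(object)) {
    callback(key, object[key]);
  }
}

const seen = [];
forEachKeyInOrderOfInsertion({ z: 1, a: 2 }, (key) => seen.push(key));
console.log(seen); // → [ 'z', 'a' ]
```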

Where your implementation guarantees key order, you just translate that to the same plain for-in loop as the original.

You can use a JS function for that with platform detection, templating like CPP, transpiling, etc. You might also want to wrap the object creation, and be very careful about things crossing boundaries. If something loses order before reaching you (such as a JSON decode of input from a client over the network), then you'll likely not have a solution to that solely within your library; this applies even when someone else is merely calling your library.
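One way to avoid depending on key order across such boundaries is to use an explicitly ordered structure in the wire format, e.g. an array of [key, value] pairs, which any compliant JSON round-trip preserves by construction (a sketch, assuming you control both ends):

```javascript
// Send an explicitly ordered list of pairs instead of an object;
// array order is part of the JSON data model, so it cannot be lost.
const wire = JSON.stringify([['hash', 'userId'], ['range', 'timestamp']]);

const entries = JSON.parse(wire);
const schema = new Map(entries); // Map preserves insertion order
console.log([...schema.keys()]); // → [ 'hash', 'range' ]
```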

You'll likely not need all of those; as a minimum, just mark the cases where you do something that might break later, and document that the potential exists.

An obvious exception to that is if the implementation guarantees consistency. In that case you will probably be wasting your time decorating everything, as it's not really a point of variability and is already documented by the implementation. The implementation often is a spec, or has its own; you can choose to stick to that rather than a more generalised spec.

Ultimately, in each case you'll need to make a judgement call, and you may also choose to take a chance. As long as you're fully aware of the potential problems, including the potential of wasting time avoiding problems you won't necessarily have (that is, you know all the stakes and have considered your circumstances), then it's up to you what to do. There's no "should" or "shouldn't"; it's case specific.

If you're making public Node.js libraries, or libraries to be widely distributed beyond the scope of your control, then I'd say it's not good to rely on implementation specifics. Instead, at least add a disclaimer to the release notes that the library only caters to your stack, and that people who want to use it elsewhere can fix it and submit a pull request. Otherwise, if it's not documented, it should be fixed.

jgmjgm