2

Let's consider the following loop in C++, where A is a vector or another container that provides .size():

for(int n=0; n < A.size(); ++n)
    cout << A[n];

I think it is equivalent to the loop below (at least in this case; if it is not strictly equivalent, can you help me figure out why? I cannot find a counter-example):

for(int n=-1; ++n < A.size(); )
    cout << A[n];

Is using the first loop somehow better than the second one? I see people using the first loop everywhere, but I have never seen the second. Why does no one write it like the second example? Is there any reason not to do it? In both cases the value of n is the same when we execute the second line of code, and it is also the same when we exit the loop. Can anything go wrong in the second loop?

To me, the second one seems even simpler.
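
For reference, a complete, compilable sketch of both variants might look like this (assuming A is a std::vector<int> with a few sample values; both loops print the same elements in the same order):

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> A{1, 2, 3};

    // Conventional form: start at 0, increment in the third clause.
    for (int n = 0; n < A.size(); ++n)
        std::cout << A[n];
    std::cout << '\n';

    // Alternative form: start at -1, pre-increment inside the condition.
    // (Both loops compare the signed n with the unsigned A.size(),
    // which many compilers warn about.)
    for (int n = -1; ++n < A.size(); )
        std::cout << A[n];
    std::cout << '\n';
}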

Cœur
  • 37,241
  • 25
  • 195
  • 267
Kusavil
  • 294
  • 6
  • 15
  • 4
    The second doesn't work for unsigned values, which are what you should be comparing with `size()`. – chris Jun 09 '13 at 09:12
  • 4
    The second one is unusual. That's not good. The first one is what everybody has been doing for ages. Look at the `for (item: collection)` C++11 variant if you want something that actually is clearer. – Mat Jun 09 '13 at 09:14
  • 1
    See [here](http://stackoverflow.com/questions/131241/why-use-iterators-instead-of-array-indices) for why you should be using iterators instead of array indices. – Garee Jun 09 '13 at 09:14
  • I'm curious why you find the second example simpler. To me, the first example seems simpler: 1) the index is initialized with the first index value instead of some jury-rigged value that's used only so it can be 'fixed up' by the first iteration, 2) placing a side-effect in a test has uses, but generally I find that putting increment expressions 'inside' other expressions to be more complicated. It's no longer necessary to pack side-effects inside other expressions for compilers to generate optimal code, and 3) the first example is more idiomatic. – Michael Burr Jun 09 '13 at 09:53
  • @chris: actually, it will work: the compiler will only give you a warning. For unsigned int n=-1 you will get UINT_MAX, and when you do ++n it's back to zero. So, in fact, it will work, but it's kinda crazy code (a small sketch of this wrap-around follows these comments)... – bartimar Jun 09 '13 at 13:59
  • @bartimar, Good point. For some reason, I thought it would compare one of the near-max values to `size()`. – chris Jun 09 '13 at 15:30
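
A small sketch of the two points raised in the comments above (the unsigned wrap-around bartimar describes, and the C++11 range-based loop Mat suggests), assuming a std::vector<int> with sample values:

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> A{10, 20, 30};

    // Unsigned wrap-around: initialising an unsigned variable with -1
    // gives the maximum representable value; incrementing it wraps to 0.
    unsigned int n = -1;   // n == UINT_MAX (compilers typically warn here)
    ++n;                   // n == 0
    std::cout << n << '\n';

    // C++11 range-based for: no index bookkeeping at all.
    for (int x : A)
        std::cout << x;
    std::cout << '\n';
}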

2 Answers

7

The first one is better because it is conventional. The second one will leave future readers scratching their heads and cursing your name.

John Zwinck
  • 239,568
  • 38
  • 324
  • 436
4
  1. Starting at minus one and relying on the pre-increment in the condition to reach 0 is not a great idea.
  2. I doubt very much that the generated code will be different (if anything, loading zero into a register is possibly more optimal than loading -1, which may need a full 32-bit value, whereas zero usually has a short form or can be produced with "subtract register from itself" or "xor register with itself").
  3. Making the code harder to read is of no benefit. If the compiler decides that this sort of transformation is better for some reason, let it mess about with the code. It's even possible that you are MISSING some optimisation tricks because you are using an "unusual" pattern.

If you want to remove the third part of the for-loop, may I suggest a more typical approach:

for(int n=0; n < A.size();)
    cout << A[n++];

(Note that for 'standard' types such as int, n++ and ++n should be equivalent in any modern compiler)
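
If you keep an index at all, a sketch using an unsigned index type (matching what A.size() returns, as the comments note) could look like this, assuming A is a std::vector<int>:

#include <cstddef>   // std::size_t
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> A{1, 2, 3};

    // An unsigned index matches the type returned by A.size(),
    // so there is no signed/unsigned comparison warning.
    for (std::size_t n = 0; n < A.size(); )
        std::cout << A[n++];
    std::cout << '\n';
}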

Mats Petersson
  • 126,704
  • 14
  • 140
  • 227