13

I've read that many developers use x += 1 instead of x++ for clarity. I understand that x++ can be ambiguous for new developers and that x += 1 is clearer, but is there any difference in efficiency between the two?

Example using for loop:

for(x = 0; x < 1000; x += 1) vs for(x = 0; x < 1000; x++)

I understand that it's usually not that big of a deal, but if I'm repeatedly calling a function that does this sort of loop, it could add up in the long run.

Another example:

while(x < 1000) {
    someArray[x];
    x += 1;
}

vs

while(x < 1000) {
    someArray[x++];
}

Can x++ be replaced with x += 1 without any performance loss? I'm especially concerned about the second example, because I'm using two lines instead of one.

What about incrementing an item in an array? Will someArray[i]++ be faster than doing someArray[i] += 1 when done in a large loop?

beatgammit
  • 7
    **You can find that out by running a benchmark!** Depending on whatever language you are using, the compiler will most likely generate the same code for both statements. *Premature optimization is the root of all evil.* – Felix Kling Jun 28 '11 at 16:20
  • 2
    For posterity sake, I much prefer `x++` to `x += 1` – Jacob Eggers Jun 28 '11 at 16:22
  • Your later example has two different statements I believe. In the first one you access index x of array someArray and then increase x by 1. In the second you access index x+1 of array someArray. – yarian Jun 28 '11 at 16:22
  • 2
    @YGomez: No, `x++` first returns `x` and then increases it. – Felix Kling Jun 28 '11 at 16:23
  • Yeah, but I'd have to test on every language. I was wondering if there was some rule of thumb I could follow. If `x += 1` is more clear, why do people use `x++` and `++x`? – beatgammit Jun 28 '11 at 16:23
  • `x++` means less typing. – SLaks Jun 28 '11 at 16:25
  • YGomez: No, both examples access `someArray` at `x`, then increase `x`. The `x++` is a *post decrement* which returns the original value and then increases the value in the variable. – DarkDust Jun 28 '11 at 16:26
  • @DarkDust: ITYM *post increment* – Paul R Jun 28 '11 at 16:27
  • Performance is not measured in lines (nor in characters). Also, as for actual performance, the answer depends not only on the language used, but specifically on the implementation and, in some cases (most compilers, I'd guess), on its options. –  Jun 28 '11 at 16:27
  • @Paul R: D'oh, of course I meant increment. – DarkDust Jun 28 '11 at 18:32
  • Are you suggesting that the ternary operator should be completely prohibited as it would scare the children? – ruslik Jun 29 '11 at 07:37
  • @ruslik Ternary makes sense in a limited number of cases, but it's easy to get confusing. I once saw a ternary used like so: `x ? y ? z : a ? b : c;` There's a point to stop and just use a nested if. Incrementing, on the other hand, is only a 3 character difference, and if there's no performance difference, then I'd use whatever's cleaner. – beatgammit Jun 29 '11 at 18:10
  • This is very language dependent, when discussing Java - it doesn't mean anything, but if you deal with Javascript: http://stackoverflow.com/questions/971312/why-avoid-increment-and-decrement-operators-in-javascript and http://www.youtube.com/watch?v=taaEzHI9xyY&t=50m42s – Nir Alfasi Apr 23 '13 at 19:25
  • `I've read that many developers use x += 1 instead of x++ for clarity.` - {{citation needed}}. It's exactly the same type of "clarity" as having `if ( boolean == true )` conditionals or `int i = 0; while( i < max ) { /* body */ i++; }` loops, or using a named temporary variable for *every* value calculated. –  Jul 18 '17 at 17:35
  • @tjameson actually, a properly formatted nested ternary is *absolutely* readable, if you maintain proper indentation, use parentheses and split it into lines as necessary. Your example is broken, because you have unmatched `?` and `:` pairs. Properly written, it's just a simple `x ? ( y ? a : b ) : ( z ? c : d );` ... and that's hardly less readable or clear than a double `if`. –  Jul 18 '17 at 17:39

5 Answers

23

Any sane or insane compiler will produce identical machine code for both.

SLaks
  • 1
    What about for interpreted languages? – beatgammit Jun 28 '11 at 16:21
  • 18
    If you're using an interpreted language, you have bigger performance issues. – SLaks Jun 28 '11 at 16:24
  • I think this answers the question at hand then. I thought this might be the case, but I just wanted to make sure. – beatgammit Jun 28 '11 at 16:26
  • @tjameson: And apart from that, JIT-compiling interpreters like PyPy and V8 are expected to take care of optimizations like these, they're really low-hanging fruit (the PyPy guys optimize out *heap allocations*, this is still wow-ing me after several months). –  Jun 28 '11 at 16:29
  • @delnan - Huh, that's interesting. Does that mean that they lie to the developer? I would assume that `new` puts data on the stack instead of the heap, but I guess it really doesn't matter since addresses aren't even accessible. – beatgammit Jun 28 '11 at 16:33
  • @tjameson: There isn't even a `new` in Python - memory comes from... well, we don't even think about where memory comes from, much less where it actually is. That said, it only works when no reference escapes from the piece of code that's JITted, because generally the interpreter does work with pointers. See e.g. http://morepypy.blogspot.com/2010/09/escape-analysis-in-pypys-jit.html and http://morepypy.blogspot.com/2010/09/using-escape-analysis-across-loop.html for detailed explanations. –  Jun 28 '11 at 16:40
5

Assuming you're applying these to primitive types and not to your own classes (where overloaded operators could make a huge difference), both can produce the same output, especially when optimization is turned on. To my surprise, I have often found in decompiled applications that x += 1 is used instead of x++ at the assembler level (add vs inc).

4

Any decent compiler should be able to recognize that the two are the same, so in the end there should be no performance difference between them.

If you want to convince yourself, just do a benchmark.

Mike Dinescu
0

Suppose you're a lazy compiler implementer who wouldn't bother writing OPTIMIZATION routines in the machine-code-gen module.

x = x + 1;

would get translated to THIS code:

mov $[x],$ACC
iadd $1,$ACC
mov $ACC,$[x]

And x++ would get translated to:

incr $[x] ;increment by 1

if ONE instruction is executed in 1 machine cycle, then x = x + 1 would take 3 machine cycles whereas x++ would take 1 machine cycle (hypothetical machine used here).

BUT luckily, most compiler implementers are NOT lazy and will write optimizations in the machine-code-gen module, so x = x + 1 and x++ SHOULD take equal time to execute. :-P

Aniket Inge
  • Most compiler writers are, in fact, so lazy, that they never implement an increment instruction in their intermediate form. Instead, they have an addition instruction and leave this kind of optimization to the back end. – razeh Apr 23 '13 at 19:28
0

When you say "it could add up in the long run" - don't think about it that way.

Rather, think in terms of percentages. When you find the program counter is in that exact code 10% or more of the time, then worry about it. The reason is that if the percentage is small, the most you could conceivably save by improving it is also small.

If the percent of time is less than 10%, you almost certainly have much bigger opportunities for speedup in other parts of the code, almost always in the form of function calls you could avoid.

Here's an example.

Mike Dunlavey
  • What you say is true, Mike. However, there is value in understanding that there generally is no difference in x++ and x += 1. If there were a difference, then it would make sense to understand why to code one way or the other. Consider string concatenation vs. string builder in Java and C#. There are cases where it makes sense to use one, and cases where it makes sense to use the other. Rules of thumb can help give you better code and performance, but detailed analysis always gives the final answer. – aaaa bbbb Jun 28 '11 at 22:25
  • @aaaa: You're right of course. In all those cases where it matters, the reason it matters is that it costs a substantial percent of time - as you say "makes sense". You and I know it's just common sense, but I'm still surprised how many folks haven't learned this yet. – Mike Dunlavey Jun 28 '11 at 22:31
  • Yes, true, but what if I have a module that does batch conversions from RGB to BGR (some images use it backwards, don't ask me why)? This would spend most of its time in loops with some bitshifting. – beatgammit Jun 28 '11 at 23:00
  • @tjameson: Then here's what I do, which always works. Take several [stackshots](http://stackoverflow.com/questions/375913/what-can-i-use-to-profile-c-code-in-linux/378024#378024). Each one tells me what it's doing (and more importantly, precisely _why_) at that moment. If two or more are in that code, it has a high enough percentage to be worth optimizing. It will point directly at the instructions costing the time. That's how I know I should optimize it. Couldn't be simpler. Notice, I don't do this _until_ it tells me I should. That's the key point, because the problem could be elsewhere. – Mike Dunlavey Jun 29 '11 at 02:29