
Someone asked "Is the ++ operator more efficient than a = a + 1?" a little while ago. I thought I had analyzed this before, and I initially said there was no difference between a = a + 1 and the increment operator ++. As it turns out, a++, ++a and a += 1 all compile to the same bytecode, but a = a + 1 does not, as can be seen below:

public class SO_Test
{
    public static void main(String[] args)
    {
        int a = 1;
        a++;
        a += 1;
        ++a;    
    }
}

Output: (screenshot of the disassembled bytecode omitted; each of the three statements compiles to a single iinc instruction)

Example:

public class SO_Test
{
    public static void main(String[] args)
    {
        int a = 1;
        a = a + 1;
        a++;
        a += 1;
        ++a;    
    }
}

Output: (screenshot of the disassembled bytecode omitted; a = a + 1 compiles to iload_1, iconst_1, iadd, istore_1, while the other three statements each compile to iinc)

In short, a = a + 1 issues iload_1, iconst_1, iadd and istore_1, whereas the others only use iinc.

I've tried to rationalize this, but I am unable to. Is the compiler not smart enough to optimize the bytecode in this case? Is there a good reason that these are different? Is this handled by JIT? Unless I'm interpreting this incorrectly, it seems like I should never use a = a + 1, which I thought for sure was just a stylistic choice.

Steve P.
  • I think there are two aspects here: 1) compile-time optimization, 2) run-time optimization. What you observed is compile-time optimization. I think the should/shouldn't-use question needs to be decided after run-time optimization. – kosa Nov 13 '13 at 16:04
  • This really isn't a comprehensive analysis of the bytecode differences; all you've shown is that the bytecode is the same for standalone expression statements. What happens when you actually _consume the result_ of `a++` or `++a`? Having written a decompiler that has to identify various pre/post-increment patterns, I remember there being more subtle differences. And then, of course, the result for instance fields, static fields, and array elements tend to be different too. – Mike Strobel Nov 14 '13 at 17:13
  • @MikeStrobel Yeah, I definitely should **not** have said detailed analysis--quite the opposite, actually. At the moment, I have too little time to do anything in detail, unfortunately. Perhaps over break... – Steve P. Nov 14 '13 at 18:28
  • Take a look at my [Procyon](https://bitbucket.org/mstrobel/procyon/) decompiler; it can probably speed up your experiments. Write some tests where you use each of the increment styles in a different context and compare the decompiled results. If the decompiler picks the same operator for all three contexts, the bytecode was probably identical. If it reconstructs the original operators, there were probably differences. Isolate those cases and then analyze the bytecode (run Procyon with `-r` for raw bytecode). – Mike Strobel Nov 14 '13 at 19:11

2 Answers


The prevailing philosophy is that javac deliberately chooses not to optimize generated code, relying on the JIT compiler to do that at runtime. The latter has far better information about the execution environment (hardware architecture etc) as well as how the code is being used at runtime.

These days, I don't think you can draw any realistic conclusions about performance from just reading the bytecodes. Arguments about premature optimization aside, if you really want to know if there's a difference, construct a micro-benchmark and see for yourself.
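As a starting point, a naive timing sketch might look like the following. The class and method names are illustrative, not from the answer, and a simple loop like this is vulnerable to JIT warm-up and dead-code elimination; for trustworthy numbers, use a proper harness such as JMH.

```java
// Naive micro-benchmark sketch comparing a = a + 1 against a++.
// Results are only indicative: without warm-up and dead-code safeguards
// (as provided by JMH), the JIT can distort or eliminate these loops.
public class IncrementBenchmark {
    static long runPlusOne(int iterations) {
        long a = 0;
        for (int i = 0; i < iterations; i++) {
            a = a + 1;
        }
        return a;
    }

    static long runIncrement(int iterations) {
        long a = 0;
        for (int i = 0; i < iterations; i++) {
            a++;
        }
        return a;
    }

    public static void main(String[] args) {
        int n = 100_000_000;
        long t0 = System.nanoTime();
        long r1 = runPlusOne(n);
        long t1 = System.nanoTime();
        long r2 = runIncrement(n);
        long t2 = System.nanoTime();
        // Returning/printing the results keeps the loops observable to the JIT.
        System.out.println("a = a + 1: " + (t1 - t0) / 1_000_000 + " ms (result " + r1 + ")");
        System.out.println("a++:       " + (t2 - t1) / 1_000_000 + " ms (result " + r2 + ")");
    }
}
```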

NPE

It's worth noting that this is compiler-specific. I found at least one Eclipse compiler version that compiles x = x + 1 the same way as x++. Further, this is relevant to local variables only, as there is no similar bytecode instruction for fields. And it works only for variables of type int, so the bytecode impact is rather limited. It's most likely there to improve the common for (int i = start; i < limit; i++) pattern. On the language side it makes a difference, especially for a[b()]++ vs. a[b()] = a[b()] + 1, etc.
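The language-level difference in that last case can be made concrete: in a[b()]++ the index expression b() is evaluated once, whereas in a[b()] = a[b()] + 1 it is evaluated twice. A small sketch (the class, array, and method names here are mine, chosen to match the answer's notation):

```java
// Demonstrates that a[b()]++ evaluates b() once, while
// a[b()] = a[b()] + 1 evaluates b() twice.
public class IndexSideEffect {
    static int calls = 0;

    // Index expression with a visible side effect: counts its invocations.
    static int b() {
        calls++;
        return 0;
    }

    public static void main(String[] args) {
        int[] a = new int[1];

        calls = 0;
        a[b()]++;              // b() evaluated once
        System.out.println("a[b()]++ called b() " + calls + " time(s)");

        calls = 0;
        a[b()] = a[b()] + 1;   // b() evaluated twice
        System.out.println("a[b()] = a[b()] + 1 called b() " + calls + " time(s)");
    }
}
```

If b() had side effects (or returned different values on successive calls), the two forms would not be equivalent, independent of any bytecode considerations.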

Holger