The bytecode generated for the two pieces of code differs on my machine (and most likely on yours too), but that doesn't matter: first, the machine code the JVM actually runs after optimization may well be the same; second, in practice you won't notice any difference either way. Try benchmarking it yourself naively (this PDF presentation explains why you shouldn't), say, in a loop, and you'll get results that can differ n-fold between runs and/or after minor changes, and that happens even in an environment you made as predictable as you could. Imagine what happens in real code, where neither of these lines is remotely a substantial CPU-spender.
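To see the difference in question, you can disassemble the compiled class with `javap -c`. A minimal sketch (the class name, the static layout, and the slot numbers are mine; the exact instructions depend on your javac, but `iinc` versus the load/add/store sequence is the typical shape):

```java
// Sketch of the two forms, with the bytecode javac typically emits
// for each noted in comments (inspect with `javap -c IncDemo`).
public class IncDemo {
    static int a = 0xDEADBEAF;

    static int f1() {
        int x = a;   // getstatic a; istore_0
        x += 2;      // typically a single instruction: iinc 0, 2
        return x;    // iload_0; ireturn
    }

    static int f2() {
        int x = a;   // getstatic a; istore_0
        x = x + 2;   // typically: iload_0; iconst_2; iadd; istore_0
        return x;    // iload_0; ireturn
    }

    public static void main(String[] args) {
        // Whatever the bytecode looks like, both forms compute the same value.
        System.out.println(f1() == f2());  // prints "true"
    }
}
```

The JIT is free to compile both sequences to identical machine code, which is why the bytecode difference tells you little about runtime cost.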
Running a proper JMH microbenchmark for this code shows that both variants take ~3-4 nanoseconds (that includes loading the initial value from a field and returning the result from the method, or else the whole method might be optimized away entirely). I'm somewhat skeptical that any framework can measure time with such accuracy on my desktop, but I do believe that either the number is true or the actual cost of these two pieces of code is drowned out by everything around them (calling the method? loading the value? returning the value?).
Benchmark       Mode  Cnt       Score      Error  Units
MyBenchmark.f1  thrpt  30  281747,576 ± 9748,881  ops/ms
MyBenchmark.f2  thrpt  30  289411,317 ± 8951,254  ops/ms
For the sake of completeness, the benchmark I used is:
import org.openjdk.jmh.annotations.*;
import java.util.concurrent.*;

@State(Scope.Thread)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
public class MyBenchmark {

    // A field, so the JIT cannot constant-fold the initial value.
    public int a = 0xDEADBEAF;

    @Benchmark
    public int f1() {
        int x = a;
        x += 2;
        return x;
    }

    @Benchmark
    public int f2() {
        int x = a;
        x = x + 2;
        return x;
    }
}
Run with 10 warmup iterations, 10 measurement iterations, and 3 forks.
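If you prefer keeping the run configuration in the source rather than on the command line, JMH lets you express the same settings with annotations. A configuration sketch matching the run described above (applied to the benchmark class itself):

```java
import org.openjdk.jmh.annotations.*;

// Equivalent of the command-line settings above, expressed as annotations:
// 10 warmup iterations, 10 measurement iterations, 3 forks.
@Warmup(iterations = 10)
@Measurement(iterations = 10)
@Fork(3)
@State(Scope.Thread)
public class MyBenchmark {
    // benchmark methods as above
}
```

Command-line options, when given, override the annotation values, so the annotations make a convenient default for the benchmark.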