There is only one way to be sure: benchmark it using JMH.
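For illustration, here is a minimal JMH sketch, assuming you are comparing two loop constructs over an array. The names (`LoopBenchmark`, `indexedFor`, `enhancedFor`) and the method bodies are hypothetical placeholders for whichever two versions you are actually comparing:

```java
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class LoopBenchmark {

    private int[] data;

    @Setup
    public void setup() {
        data = new int[10_000];
        for (int i = 0; i < data.length; i++) {
            data[i] = i;
        }
    }

    @Benchmark
    public void indexedFor(Blackhole bh) {
        for (int i = 0; i < data.length; i++) {
            bh.consume(data[i]);   // Blackhole stops the JIT from eliminating the loop body
        }
    }

    @Benchmark
    public void enhancedFor(Blackhole bh) {
        for (int value : data) {
            bh.consume(value);
        }
    }
}
```

The `Blackhole` consumption matters: without it, the JIT is free to dead-code-eliminate the loop bodies, and you end up measuring nothing at all.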
(My gut feeling is that the JIT compiler will be able to produce native code for the two versions that is identical in performance. It should not be that hard to do ...)
But there is a bigger lesson here. In a real-world program, a (hypothetical!) small difference in performance between these two constructs is unlikely to make a significant difference to the performance of the application as a whole. Indeed, if you are not careful, you are liable to waste a lot of time optimizing things that don't matter. Unless you have studied the JIT compiler and the optimizations it performs ... in depth ... your intuition about what is going to matter is likely to be flawed.
Furthermore, you are liable to find that the "micro" performance depends on details that are typically outside of your control:
- JRE minor versions
- differences in chipset behavior due to pipeline design, L1 & L2 cache sizes, etcetera
- different physical memory and heap sizes
Your micro-optimizations for one JRE, chipset, etc. may actually make things worse on a newer JRE, a different chipset, etcetera.
The best strategy is to leave the "micro" optimization to the JIT compiler.
And if you need to hand-optimize:
- Create a realistic benchmark for the entire application (or library).
- Set some quantifiable performance goals.
- Get the software to work.
- Benchmark it (see the runner sketch after this list). If it meets the goals, STOP!
- Profile it to find the hotspots.
- Pick a fresh hotspot to optimize. If there are none left where the percentage time spent is large enough to make a difference, STOP!
- Hand optimize the hotspot.
- Benchmark / profile again. Did that improve things? If not, BACK OUT the optimization.
- Repeat.
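As a sketch of the benchmark / profile steps above: JMH can drive the benchmark and attach its built-in stack profiler in the same run. The fork and iteration counts here are illustrative, not recommendations, and `LoopBenchmark` is the hypothetical class from earlier:

```java
import org.openjdk.jmh.profile.StackProfiler;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class BenchmarkRunner {
    public static void main(String[] args) throws RunnerException {
        Options opts = new OptionsBuilder()
                .include(LoopBenchmark.class.getSimpleName())
                .forks(3)                          // several JVM forks to average out run-to-run variance
                .warmupIterations(5)               // let the JIT finish compiling before measuring
                .measurementIterations(5)
                .addProfiler(StackProfiler.class)  // crude built-in hotspot sampling
                .build();
        new Runner(opts).run();
    }
}
```

Note that a profiler attached this way only tells you where time goes inside the benchmarked code; for the "entire application" benchmark in the first step you would typically use a realistic load harness plus a production-grade profiler (e.g. JFR or async-profiler) instead.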