Microbenchmarks are notoriously difficult to get right, especially in "intelligent" languages such as Java, where the compiler and HotSpot can perform all sorts of optimisations. You almost certainly aren't testing what you think you're testing. Have a read of Anatomy of a Flawed Microbenchmark for more details and examples (it's a fairly old article now, but the principles are as valid as ever).
In this particular case, I can see at least three problems right off the bat:
- The code won't be performing any addition at all, because the compiler will substitute the compile-time constant values of the variables. That is, it's as if your code read `int a = 20;` and `int b = 120;`. (Reading the operands at runtime sidesteps this; see the sketch after this list.)
- The granularity of `nanoTime` is quite coarse on most systems. Combined with background load from the OS, that means your experimental error in measurement is far greater than the magnitude of the result itself. (Timing a large batch of iterations and dividing, as in the sketch below, reduces this effect.)
- Even if addition were occurring, you haven't "warmed up" the VM, so the JIT won't yet have compiled the hot path; typically whichever operation you put second would appear faster for this reason.
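To make those pitfalls concrete, here is a minimal sketch of a hand-rolled timing loop that tries to dodge all three (the class name, operand ranges, and iteration counts are my own illustrative choices). Even this is not bulletproof: the JIT may still hoist the loop-invariant addition out of the loop, which is part of why hand-rolled benchmarks are so treacherous.

```java
import java.util.concurrent.ThreadLocalRandom;

public class AdditionTiming {
    public static void main(String[] args) {
        // Read the operands at runtime so the compiler cannot fold
        // the addition into a compile-time constant.
        int a = ThreadLocalRandom.current().nextInt(100);
        int b = ThreadLocalRandom.current().nextInt(100);

        // Warm-up phase: run the operation enough times that the JIT
        // compiles the hot path before measurement starts.
        long sink = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sink += a + b;
        }

        // Time a large batch and divide, rather than a single operation,
        // so the measured interval dwarfs nanoTime's granularity.
        final int iterations = 100_000_000;
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            sink += a + b;
        }
        long elapsed = System.nanoTime() - start;

        // Print the accumulated sink so the loop body can't be
        // eliminated as dead code.
        System.out.println("sink = " + sink);
        System.out.printf("~%.3f ns per iteration%n", (double) elapsed / iterations);
    }
}
```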
There are probably more potential hazards lurking as well.
The moral of the story is to test your code in real-world conditions and see how it behaves. It is simply not accurate to test small pieces of code in isolation and assume that overall performance will be the sum of those pieces.
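If you genuinely do need to measure something this small, the usual approach nowadays is to reach for a purpose-built harness rather than rolling your own loop. As a hedged sketch (the answer above doesn't name a tool; I'm assuming JMH, the OpenJDK microbenchmark harness, with its dependency on the classpath), the benchmark might look like this. JMH handles warm-up iterations, forking, and dead-code elimination for you:

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
public class AdditionBenchmark {
    // Non-final instance fields, so the JIT can't treat the
    // operands as compile-time constants.
    int a = 20;
    int b = 120;

    @Benchmark
    public int add() {
        // Returning the result lets JMH consume it through its
        // blackhole, preventing dead-code elimination.
        return a + b;
    }
}
```

Even with a harness, though, the moral above still holds: a number like this tells you about the addition in isolation, not about how your real application will perform.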