
I have this code:

public static void main(String[] args) {
    long f = System.nanoTime();
    int a = 10 + 10;
    long s = System.nanoTime();
    System.out.println(s - f);  // elapsed time around the assignment to a

    long g = System.nanoTime();
    int b = 10 + 10 + 10 + 10 + 10 + 10 + 10 + 10 + 10 + 10 + 10 + 10;
    long h = System.nanoTime();
    System.out.println(h - g);  // elapsed time around the assignment to b
}

With these outputs:
Test 1:

427
300

Test 2:

533
300

Test 3:

431
398

Based on my test scenarios, why does the line int b = 10 + 10 + 10 + 10 + 10 + 10 + 10 + 10 + 10 + 10 + 10 + 10; execute faster than int a = 10 + 10;?

Michael 'Maik' Ardan
  • Concluding this in _one_ run inside a main method is far too uncertain. At least run it a couple of million times within a loop. – NilsH Jun 03 '13 at 10:02
  • Michael, please check the URL below. The pool behavior for Integer types in Java causes the later one to run faster: by the time integer b is executed, 10 is already in the pool. Hope this helps. http://stackoverflow.com/questions/13098143/java-integer-constant-pool – Harish Kumar Jun 03 '13 at 10:04
  • I get a different result each time I run this code, from 0 for both of them to 600; sometimes one of them is 0. These operations are too quick for you to test in a single run. – Djon Jun 03 '13 at 10:05
  • Check out the bytecode (use `javap -c`) to see whether your code was optimized at compile time. Besides that, it seems to be a very good candidate for runtime optimization, as variables `a` and `b` are not used. **All the code tests is the System.nanoTime() call.** – Grzegorz Żur Jun 03 '13 at 10:05
  • This isn't really a good test. The values of `a` and `b` are constants calculated at compile time, so nothing is actually happening at runtime. – Zutty Jun 03 '13 at 10:05
  • Hi @NilsH, I'll edit the post after trying your suggestion. Thank you! – Michael 'Maik' Ardan Jun 03 '13 at 10:05
  • @HarishKumar This has nothing to do with integer caching, which applies to `Integer`, not `int`... – assylias Jun 03 '13 at 10:10
  • The granularity of System.nanoTime is way too coarse to measure such an operation (which runs in a few nanoseconds at most). The call to System.nanoTime itself probably takes more time than what you are trying to measure. – assylias Jun 03 '13 at 10:11

3 Answers


Microbenchmarks are notoriously difficult to get right, especially in "intelligent" languages such as Java, where the compiler and Hotspot can do lots of optimisations. You almost certainly aren't testing what you think you're testing. Have a read of Anatomy of a Flawed Microbenchmark for more details and examples (it's a fairly old article now, but the principles are as valid as ever).

In this particular case, I can see at least three problems right off the bat:

  • The code won't be performing any addition at all, because the compiler will assign the variables their compile-time constant values (i.e. it's as if your code read int a = 20; and int b = 120;); see the bytecode sketch after this list.
  • The granularity of nanoTime is quite high on most systems. That, combined with load from the OS, is going to mean your experimental error in measurement is much greater than the magnitude of the result itself.
  • Even if addition were occurring, you haven't "warmed up" the VM; typically whichever operation you put second would appear faster for this reason.
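
For illustration, here is roughly what javap -c would show for a stripped-down main containing only the two assignments (a sketch; the exact offsets and local-variable slots are assumptions and will vary with the surrounding code, and the // annotations are added for explanation):

public static void main(java.lang.String[]);
  Code:
     0: bipush        20     // "10 + 10" already folded to the constant 20 by javac
     2: istore_1
     3: bipush        120    // the twelve 10s already folded to the constant 120
     5: istore_2
     6: return

Nothing performs an addition at runtime; the compiler has already done the arithmetic.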

There are probably more potential hazards lurking as well.

The moral of the story is to test your code in real-world conditions to see how it behaves. It is in no way accurate to test small pieces of code in isolation and assume that the overall performance will be the sum of those pieces.

Andrzej Doyle

First of all, the Java compiler performs constant folding on constant expressions, so at compile time your code will be converted to:

int a = 20;
int b = 120;

As a result, the JVM performs the assignments a = 20 and b = 120 in nearly the same time.

Second, you are taking a very short measurement of a big system (I mean the entire computer, including the OS, swapping, and other running processes), so you get a snapshot of an effectively random system over a very small time period. That is why you cannot conclude whether the assignment to a is faster than the assignment to b. To get a meaningful answer, you have to place the measured code inside a rather big loop and repeat it approximately 1,000,000 times; such repetition smooths out the expectation (in the mathematical sense of the word), as in the sketch below.
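
A minimal sketch of such a loop-based measurement (the iteration count is arbitrary, and printing sum is only there so the JIT cannot discard the loop as dead code; note that the addition itself is still folded to a constant at compile time, so this only demonstrates the measurement structure):

public static void main(String[] args) {
    final int iterations = 1000000;
    long sum = 0;
    long start = System.nanoTime();
    for (int i = 0; i < iterations; i++) {
        sum += 10 + 10;                      // the "work" under test
    }
    long elapsed = System.nanoTime() - start;
    System.out.println("average ns per iteration: " + (double) elapsed / iterations);
    System.out.println(sum);                 // keep the result observably used
}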

Dewfy

This is not the correct way to measure performance.

First of all, do not measure such a small piece of code in a single run. Instead, call it millions of times, as suggested by @NilsH, and get the average time by dividing the elapsed time by the number of calls.

Second, the JVM will likely perform optimizations on your code, so you need to give it a "warm-up" period. Do a few million dry runs without measuring the time at all, then begin your measurements.
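
A minimal sketch of that warm-up-then-measure pattern (compute() is a hypothetical stand-in for the code under test, and the loop counts are arbitrary):

class WarmupBenchmark {

    static int compute() {
        return 10 + 10;                      // stand-in for the code under test
    }

    public static void main(String[] args) {
        // Warm-up phase: run "dry" so the JIT can compile and optimize first.
        for (int i = 0; i < 5000000; i++) {
            compute();
        }

        // Measurement phase: time many calls and take the average.
        final int iterations = 5000000;
        long sink = 0;
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            sink += compute();
        }
        long elapsed = System.nanoTime() - start;
        System.out.println("average ns per call: " + (double) elapsed / iterations);
        System.out.println(sink);            // keep the result alive
    }
}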

omer schleifer