Possible Duplicate:
Which of these pieces of code is faster in Java?
If I write a loop as
for (int i = n; i >= 0; i--)
and another one as
for (int i = 0; i <= n; i++)
which one would be faster in Java, and why? Say n = 10000.
Never wonder; use Google Caliper to find out. Since there has been quite a bit of discussion around the relative weights of testing against zero vs. upper limit and incrementing vs. decrementing, here's the Cartesian product of all these cases:
import java.util.Random;

import com.google.caliper.Runner;
import com.google.caliper.SimpleBenchmark;

public class Performance extends SimpleBenchmark {
  static final Random rnd = new Random();

  public int timeDecrementToZero(int reps) {
    int sum = rnd.nextInt();
    for (int i = 0; i < reps; i++) {
      for (int j = Integer.MAX_VALUE; j >= 0; j--) sum += j;
    }
    return sum;
  }

  public int timeDecrementFromZero(int reps) {
    int sum = rnd.nextInt();
    for (int i = 0; i < reps; i++) {
      for (int j = 0; j > Integer.MIN_VALUE; j--) sum += j;
    }
    return sum;
  }

  public int timeIncrementFromZero(int reps) {
    int sum = rnd.nextInt();
    for (int i = 0; i < reps; i++) {
      for (int j = 0; j < Integer.MAX_VALUE; j++) sum += j;
    }
    return sum;
  }

  public int timeIncrementToZero(int reps) {
    int sum = rnd.nextInt();
    for (int i = 0; i < reps; i++) {
      for (int j = Integer.MIN_VALUE; j < 0; j++) sum += j;
    }
    return sum;
  }

  public static void main(String... args) {
    Runner.main(Performance.class, args);
  }
}
Results:
0% Scenario{vm=java, trial=0, benchmark=DecrementToZero} 984060500.00 ns; σ=30872487.22 ns @ 10 trials
25% Scenario{vm=java, trial=0, benchmark=DecrementFromZero} 982646000.00 ns; σ=35524893.00 ns @ 10 trials
50% Scenario{vm=java, trial=0, benchmark=IncrementFromZero} 1023745500.00 ns; σ=24828496.82 ns @ 10 trials
75% Scenario{vm=java, trial=0, benchmark=IncrementToZero} 1081112500.00 ns; σ=20160821.13 ns @ 10 trials
          benchmark   ms linear runtime
    DecrementToZero  984 ===========================
  DecrementFromZero  983 ===========================
  IncrementFromZero 1024 ============================
    IncrementToZero 1081 ==============================
Apparently, whether the limit is zero or not has less effect than using inc vs. dec.
To point out just how tenuous these differences are, here is virtually the same code, but now it uses longs (I include one method from the first example, to maintain scale):
public int timeDecrementFromZeroInt(int reps) {
  int sum = rnd.nextInt();
  for (int i = 0; i < reps; i++) {
    for (int j = 0; j > Integer.MIN_VALUE; j--) sum += j;
  }
  return sum;
}

public long timeDecrementFromZero(int reps) {
  long sum = rnd.nextLong();
  for (long i = 0; i < reps; i++) {
    for (long j = 0; j > Integer.MIN_VALUE; j--) sum += j;
  }
  return sum;
}

public long timeIncrementFromZero(int reps) {
  long sum = rnd.nextLong();
  for (long i = 0; i < reps; i++) {
    for (long j = 0; j < Integer.MAX_VALUE; j++) sum += j;
  }
  return sum;
}

public long timeDecrementToZero(int reps) {
  long sum = rnd.nextLong();
  for (long i = 0; i < reps; i++) {
    for (long j = Integer.MAX_VALUE; j >= 0; j--) sum += j;
  }
  return sum;
}

public long timeIncrementToZero(int reps) {
  long sum = rnd.nextLong();
  for (long i = 0; i < reps; i++) {
    for (long j = Integer.MIN_VALUE; j < 0; j++) sum += j;
  }
  return sum;
}
Results:
0% Scenario{vm=java, trial=0, benchmark=DecrementFromZeroInt} 978513000.00 ns; σ=14861284.82 ns @ 10 trials
20% Scenario{vm=java, trial=0, benchmark=DecrementFromZero} 2160652000.00 ns; σ=13825686.87 ns @ 3 trials
40% Scenario{vm=java, trial=0, benchmark=IncrementFromZero} 2153370000.00 ns; σ=6318160.49 ns @ 3 trials
60% Scenario{vm=java, trial=0, benchmark=DecrementToZero} 4379893000.00 ns; σ=8739917.79 ns @ 3 trials
80% Scenario{vm=java, trial=0, benchmark=IncrementToZero} 4383569000.00 ns; σ=5798095.89 ns @ 3 trials
           benchmark   ms linear runtime
DecrementFromZeroInt  979 ======
   DecrementFromZero 2161 ==============
   IncrementFromZero 2153 ==============
     DecrementToZero 4380 =============================
     IncrementToZero 4384 ==============================
Main conclusion: never assume anything about performance at such a low level. Write your complete code and test it as a whole, because there will always be something else you are not taking into account that completely turns the tables.
It's possible that the CPU has a faster way of comparing a number (i) against 0 than against another arbitrary number (n). That would, in theory, make the decrementing version faster.
This is purely academic, though, IMHO. The two loops are fundamentally the same, so you should implement whichever is more logical and understandable to whoever maintains your code after you.
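If you want to see where that intuition comes from, one rough check is to look at the bytecode javac emits for the two forms. In a hypothetical class like the one below (names are made up for illustration), javap -c typically shows the countdown loop testing its counter with the single-operand iflt instruction (a compare against zero), while the count-up loop tests with the two-operand if_icmpgt (a compare against the variable n). Whether that survives JIT compilation as a measurable difference is another question entirely.
public class LoopCompare {
  // Counts down to zero; the loop test typically compiles to a
  // one-operand branch against zero (iflt).
  static int down(int n) {
    int sum = 0;
    for (int i = n; i >= 0; i--) sum += i;
    return sum;
  }

  // Counts up to n; the loop test typically compiles to a
  // two-operand comparison against the local variable n (if_icmpgt).
  static int up(int n) {
    int sum = 0;
    for (int i = 0; i <= n; i++) sum += i;
    return sum;
  }

  public static void main(String[] args) {
    // Both compute the sum 0 + 1 + ... + n, so this prints true.
    System.out.println(down(10000) == up(10000));
  }
}
Compile it with javac and run javap -c LoopCompare to inspect the generated instructions.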
Just write your loops the way it makes the most sense to write them. It's unlikely that (a) you're doing anything so time-critical that a few extra nanoseconds over the entire run of your program will make a difference, or that (b) your code is so optimized that the bottleneck is the increment or decrement operation in a loop.
If, after you test, profiling shows a particular loop to be a problem, then worry about optimizing that loop, focusing on the loop body rather than on things like increment and decrement.
The answer is that it depends on what n is. When counting down, the code only needs to read n once. When counting up, that may not be true: if n is a volatile field, or if something in the loop body may change its value, n has to be looked up on every pass through the loop, which slows the loop down significantly.
With this code, counting up is several hundred times slower than counting down:
public class Counts {
  private static final int ITERS = 100000;
  volatile int n = 1000;

  public long countUp() {
    long start = System.nanoTime();
    for (int iter = 0; iter < ITERS; ++iter) {
      for (int i = 0; i < n; ++i) {
        // do nothing
      }
    }
    return System.nanoTime() - start;
  }

  public long countDown() {
    long start = System.nanoTime();
    for (int iter = 0; iter < ITERS; ++iter) {
      for (int i = n - 1; i >= 0; --i) {
        // do nothing
      }
    }
    return System.nanoTime() - start;
  }
}
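The Counts class above has no entry point of its own; a minimal harness to reproduce the effect might look like the following (hypothetical class name, with one warm-up call per method so the JIT has a chance to compile them before the timed runs):
public class CountsDemo {
  public static void main(String[] args) {
    Counts counts = new Counts();
    // Warm-up calls so the JIT compiles both methods before measuring.
    counts.countUp();
    counts.countDown();
    System.out.printf("count up:   %d ns%n", counts.countUp());
    System.out.printf("count down: %d ns%n", counts.countDown());
  }
}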
If there is any measurable difference, then the variant that compares against 0 is faster, because at the CPU level a compare with 0 is cheaper. In most cases, though, you are better off sticking with readable code.
The difference in time is insignificant in most cases compared to the amount of time necessary for someone else to understand what's going on. Just use whatever's easiest to follow. If you wanted to test it you could run something like this:
long startTime = System.nanoTime();
long duration, endTime;
for (int i = 0; i < 1000; i++) {
    // function under test
}
endTime = System.nanoTime();
duration = endTime - startTime;
System.out.printf("The duration was %d.%03d microseconds%n", duration / 1000, duration % 1000);
Run it once with the increment loop and once with the decrement loop as the body, as sketched below.
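Putting that together, a self-contained sketch might look like the one below. This is illustrative only; a plain nanoTime comparison like this is easily skewed by JIT warm-up and dead-code elimination, which is why the loops accumulate into a sum that gets printed.
public class LoopTiming {
  public static void main(String[] args) {
    final int n = 10000;
    final int runs = 1000;
    long sum = 0; // accumulate and print so the loops are not optimized away

    long start = System.nanoTime();
    for (int run = 0; run < runs; run++) {
      for (int i = 0; i <= n; i++) sum += i; // increment version
    }
    long incrementNanos = System.nanoTime() - start;

    start = System.nanoTime();
    for (int run = 0; run < runs; run++) {
      for (int i = n; i >= 0; i--) sum += i; // decrement version
    }
    long decrementNanos = System.nanoTime() - start;

    System.out.printf("increment: %d ns, decrement: %d ns (sum=%d)%n",
        incrementNanos, decrementNanos, sum);
  }
}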
The bigger issue is post- versus pre-increment/decrement. There's a pretty good explanation starting at the bottom of page 2 here: http://www.iar.com/Global/Resources/Developers_Toolbox/C_Cplusplus_Programming/Writing%20optimizer-friendly%20code.pdf
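For a plain int loop counter in Java, you can check what the two forms compile to yourself: when the result of the expression is unused, javac typically emits the same iinc instruction for both. A small hypothetical class to inspect with javap -c:
public class IncrementForms {
  // Post-increment in the update clause; the result of i++ is discarded.
  static int post(int n) {
    int sum = 0;
    for (int i = 0; i < n; i++) sum += i;
    return sum;
  }

  // Pre-increment in the update clause; the result of ++i is discarded.
  static int pre(int n) {
    int sum = 0;
    for (int i = 0; i < n; ++i) sum += i;
    return sum;
  }

  public static void main(String[] args) {
    // Both loops update the counter with the same iinc instruction,
    // so the two methods behave identically; this prints true.
    System.out.println(post(10000) == pre(10000));
  }
}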