I would expect the second one to be faster as it makes fewer comparisons. However, the difference is so small that the timing results will depend heavily on how you benchmark.
I would go with whichever you believe is clearest and simplest; that is also the form the JVM is most likely to optimise well.
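For what it's worth, there is also a single-comparison version; if memory serves, Java 8's Float.isFinite(float) is written much like this, but treat it as a sketch (and the name isFinite3 as mine) rather than the library's exact source:

private static boolean isFinite3(float x) {
    // Every finite float has a magnitude of at most Float.MAX_VALUE;
    // Math.abs(NaN) is NaN, and any comparison with NaN is false.
    return Math.abs(x) <= Float.MAX_VALUE;
}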
EDIT: The problem with micro-benchmarking is that how you test can impact the results.
private static boolean isFinite1(float x) {
    // Finite iff strictly between the infinities; NaN fails both comparisons.
    return Float.NEGATIVE_INFINITY < x && x < Float.POSITIVE_INFINITY;
}

private static boolean isFinite2(float x) {
    // Finite iff not NaN (x != x) and not either infinity.
    return !(x != x || x == Float.POSITIVE_INFINITY || x == Float.NEGATIVE_INFINITY);
}

public static void main(String[] args) {
    int nums = 10000;
    int runs = 10000;
    float[] floats = new float[nums];
    // Roughly 1% NaN, 1% -Infinity, 1% +Infinity, the rest finite.
    for (int i = 0; i < nums; i++) {
        double d = Math.random();
        floats[i] = d < 0.01 ? Float.NaN :
                    d < 0.02 ? Float.NEGATIVE_INFINITY :
                    d < 0.03 ? Float.POSITIVE_INFINITY : (float) d;
    }
    for (int n = 0; n < 10; n++) {
        {
            // Time isFinite1 first, then isFinite2.
            int count1 = 0, count2 = 0;
            long timeA = System.nanoTime();
            for (int i = 0; i < runs; i++)
                for (float f : floats)
                    if (isFinite1(f)) count1++;
            long timeB = System.nanoTime();
            for (int i = 0; i < runs; i++)
                for (float f : floats)
                    if (isFinite2(f)) count2++;
            long timeC = System.nanoTime();
            long total1 = timeB - timeA;
            long total2 = timeC - timeB;
            assert count1 == count2;
            System.out.printf("1,2: isFinite1 took %.1f ns and isFinite2 took %.1f ns on average%n",
                    (double) total1 / runs / nums, (double) total2 / runs / nums);
        }
        {
            // The same tests with the order reversed: isFinite2 first, then isFinite1.
            int count1 = 0, count2 = 0;
            long timeA = System.nanoTime();
            for (int i = 0; i < runs; i++)
                for (float f : floats)
                    if (isFinite2(f)) count1++;
            long timeB = System.nanoTime();
            for (int i = 0; i < runs; i++)
                for (float f : floats)
                    if (isFinite1(f)) count2++;
            long timeC = System.nanoTime();
            long total2 = timeB - timeA; // isFinite2 ran first
            long total1 = timeC - timeB; // isFinite1 ran second
            assert count1 == count2;
            System.out.printf("2,1: isFinite1 took %.1f ns and isFinite2 took %.1f ns on average%n",
                    (double) total1 / runs / nums, (double) total2 / runs / nums);
        }
    }
}
prints
1,2: isFinite1 took 1.5 ns and isFinite2 took 5.1 ns on average
2,1: isFinite1 took 4.4 ns and isFinite2 took 3.6 ns on average
1,2: isFinite1 took 1.5 ns and isFinite2 took 5.1 ns on average
2,1: isFinite1 took 4.4 ns and isFinite2 took 3.6 ns on average
1,2: isFinite1 took 1.5 ns and isFinite2 took 5.2 ns on average
2,1: isFinite1 took 4.4 ns and isFinite2 took 3.6 ns on average
As you can see, even the order in which I test these makes a big difference.
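Much of that is warm-up and JIT compilation order. If you want numbers that are less sensitive to this, a harness such as JMH is the usual answer. A minimal sketch, assuming JMH is on the classpath (the class and method names here are mine):

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Thread)
public class IsFiniteBench {
    float[] floats;

    @Setup
    public void setup() {
        // Same distribution as the hand-rolled benchmark above.
        floats = new float[10000];
        for (int i = 0; i < floats.length; i++) {
            double d = Math.random();
            floats[i] = d < 0.01 ? Float.NaN :
                        d < 0.02 ? Float.NEGATIVE_INFINITY :
                        d < 0.03 ? Float.POSITIVE_INFINITY : (float) d;
        }
    }

    @Benchmark
    public void finite1(Blackhole bh) {
        // consume() stops the JIT treating the check as dead code.
        for (float f : floats)
            bh.consume(Float.NEGATIVE_INFINITY < f && f < Float.POSITIVE_INFINITY);
    }

    @Benchmark
    public void finite2(Blackhole bh) {
        for (float f : floats)
            bh.consume(!(f != f || f == Float.POSITIVE_INFINITY || f == Float.NEGATIVE_INFINITY));
    }
}

JMH runs each benchmark in a forked JVM by default, which avoids exactly the ordering artefact shown above.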
Far more important than the individual operations is the number of branches and how well branch prediction works for your data. See http://www.agner.org/optimize/microarchitecture.pdf
Say I make the special values 25x more likely, so that each of the four outcomes is equally likely.
floats[i] = d < 0.25 ? Float.NaN :
            d < 0.5  ? Float.NEGATIVE_INFINITY :
            d < 0.75 ? Float.POSITIVE_INFINITY : (float) d;
All this changes is how often each path is taken, which makes the branches much less predictable.
1,2: isFinite1 took 8.5 ns and isFinite2 took 14.2 ns on average
2,1: isFinite1 took 11.5 ns and isFinite2 took 10.9 ns on average
1,2: isFinite1 took 7.2 ns and isFinite2 took 14.4 ns on average
2,1: isFinite1 took 11.5 ns and isFinite2 took 11.0 ns on average
1,2: isFinite1 took 7.3 ns and isFinite2 took 14.2 ns on average
2,1: isFinite1 took 11.5 ns and isFinite2 took 10.8 ns on average
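If the now unpredictable branches are really the bottleneck, you can avoid the floating-point comparisons entirely and test the exponent bits instead. This is my own variant, not something from the question, and worth measuring the same way before trusting it:

private static boolean isFiniteBits(float x) {
    // After clearing the sign bit, every finite float is strictly below
    // 0x7f800000 (the bit pattern of +Infinity); NaNs and infinities are not.
    return (Float.floatToRawIntBits(x) & 0x7fffffff) < 0x7f800000;
}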
I repeat, clearer code should be your goal! ;)