
Consider the following two snippets of code on an array of length 2:

boolean isOK(int i) {
    for (int j = 0; j < filters.length; ++j) {
        if (!filters[j].isOK(i)) {
            return false;
        }
    }
    return true;
}

and

boolean isOK(int i) {
    return filters[0].isOK(i) && filters[1].isOK(i);
}

I would assume that the performance of these two pieces should be similar after sufficient warm-up.
I've checked this using the JMH micro-benchmarking framework (as described, e.g., here and here) and observed that the second snippet is more than 10% faster.

Question: why hasn't Java optimized my first snippet using the basic loop unrolling technique?
In particular, I'd like to understand the following:

  1. I can easily produce code that is optimal for the case of 2 filters and still works for any other number of filters (imagine a simple builder):
    return (filters.length == 2) ? new FilterChain2(filters) : new FilterChain1(filters). Can the JIT compiler do the same, and if not, why?
  2. Can the JIT compiler detect that filters.length == 2 is the most frequent case and produce code that is optimal for this case after some warm-up? This should be almost as optimal as the manually unrolled version.
  3. Can the JIT compiler detect that a particular instance is used very frequently and then produce code for that specific instance (for which it knows that the number of filters is always 2)?
    Update: got an answer that the JIT compiler works only at the class level. OK, got it.
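The builder idea from point 1 can be sketched as follows (a minimal sketch; the FilterChainN/FilterChainOf2 names and the Filters.chainOf helper are hypothetical stand-ins, not the benchmark classes below):

```java
// Sketch of the builder from point 1: pick a hand-unrolled implementation
// when the chain happens to contain exactly two filters.
interface Filter {
    boolean isOK(int i);
}

// Generic loop version, works for any number of filters.
final class FilterChainN implements Filter {
    private final Filter[] filters;

    FilterChainN(Filter[] filters) {
        this.filters = filters;
    }

    public boolean isOK(int i) {
        for (Filter f : filters) {
            if (!f.isOK(i)) {
                return false;
            }
        }
        return true;
    }
}

// Hand-unrolled version for exactly two filters.
final class FilterChainOf2 implements Filter {
    private final Filter f0, f1;

    FilterChainOf2(Filter[] filters) {
        this.f0 = filters[0];
        this.f1 = filters[1];
    }

    public boolean isOK(int i) {
        return f0.isOK(i) && f1.isOK(i);
    }
}

class Filters {
    // Dispatch once at construction time instead of hoping the JIT unrolls.
    static Filter chainOf(Filter... filters) {
        return filters.length == 2 ? new FilterChainOf2(filters) : new FilterChainN(filters);
    }
}
```

The choice between implementations is made once when the chain is built, so the per-call code path contains no loop at all in the two-filter case.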

Ideally, I would like an answer from someone with a deep understanding of how the JIT compiler works.

Benchmark run details:

  • Tried on the latest versions of Java 8 OpenJDK and Oracle HotSpot; the results are similar
  • Used Java flags: -Xmx4g -Xms4g -server -Xbatch -XX:CICompilerCount=2 (got similar results without the fancy flags as well)
  • By the way, I get a similar runtime ratio if I simply run it several billion times in a loop (not via JMH), i.e. the second snippet is always clearly faster

Typical benchmark output:

Benchmark (filterIndex) Mode Cnt Score Error Units
LoopUnrollingBenchmark.runBenchmark 0 avgt 400 44.202 ± 0.224 ns/op
LoopUnrollingBenchmark.runBenchmark 1 avgt 400 38.347 ± 0.063 ns/op

(The first line corresponds to the first snippet, the second line to the second.)

Complete benchmark code:

public class LoopUnrollingBenchmark {

    @State(Scope.Benchmark)
    public static class BenchmarkData {
        public Filter[] filters;
        @Param({"0", "1"})
        public int filterIndex;
        public int num;

        @Setup(Level.Invocation) // similar ratio with Level.Trial
        public void setUp() {
            filters = new Filter[]{new FilterChain1(), new FilterChain2()};
            num = new Random().nextInt();
        }
    }

    @Benchmark
    @Fork(warmups = 5, value = 20)
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    public int runBenchmark(BenchmarkData data) {
        Filter filter = data.filters[data.filterIndex];
        int sum = 0;
        int num = data.num;
        if (filter.isOK(num)) {
            ++sum;
        }
        if (filter.isOK(num + 1)) {
            ++sum;
        }
        if (filter.isOK(num - 1)) {
            ++sum;
        }
        if (filter.isOK(num * 2)) {
            ++sum;
        }
        if (filter.isOK(num * 3)) {
            ++sum;
        }
        if (filter.isOK(num * 5)) {
            ++sum;
        }
        return sum;
    }


    interface Filter {
        boolean isOK(int i);
    }

    static class Filter1 implements Filter {
        @Override
        public boolean isOK(int i) {
            return i % 3 == 1;
        }
    }

    static class Filter2 implements Filter {
        @Override
        public boolean isOK(int i) {
            return i % 7 == 3;
        }
    }

    static class FilterChain1 implements Filter {
        final Filter[] filters = createLeafFilters();

        @Override
        public boolean isOK(int i) {
            for (int j = 0; j < filters.length; ++j) {
                if (!filters[j].isOK(i)) {
                    return false;
                }
            }
            return true;
        }
    }

    static class FilterChain2 implements Filter {
        final Filter[] filters = createLeafFilters();

        @Override
        public boolean isOK(int i) {
            return filters[0].isOK(i) && filters[1].isOK(i);
        }
    }

    private static Filter[] createLeafFilters() {
        Filter[] filters = new Filter[2];
        filters[0] = new Filter1();
        filters[1] = new Filter2();
        return filters;
    }

    public static void main(String[] args) throws Exception {
        org.openjdk.jmh.Main.main(args);
    }
}
Alexander
    The compiler can't guarantee that the length of the array is 2. I'm not sure it would unroll it even if it could though. – marstran Nov 22 '19 at 14:05
    `@Setup(Level.Invocation)` : not sure it helps (see the javadoc). – GPI Nov 22 '19 at 14:07
    Since there is no guarantee anywhere that the array is always length 2, the two methods are not doing the same thing. How could JIT then allow itself to change the first into the second? – Andreas Nov 22 '19 at 14:10
  • @Andreas I suggest you answer the question, but elaborate why JIT can't unroll in this case comparing to some another similar case where it can – Alexander Nov 22 '19 at 14:20
  • @GPI, do you mean javadoc on Invocation? What do you suggest instead? – Alexander Nov 22 '19 at 14:21
  • @marstran, see my comment to Andreas - consider answering the question – Alexander Nov 22 '19 at 14:22
  • @Andreas it is still unclear to me: why the optimizer can't detect that there is no change to the array size after creation and then use this information to unroll – Alexander Nov 22 '19 at 14:24
  • @Andreas OK, sure. The problem that I am not certain that this is the right or complete answer. The question remains unanswered: why JIT can't detect that there is no change to the array after creation and then unroll the loop – Alexander Nov 22 '19 at 14:28
    @Alexander JIT *can* see that the array length cannot change after creation, because the field is `final`, but JIT doesn't see that **all instances** of the class will get an array of length 2. To see that, it would have to dive into the `createLeafFilters()` method and analyze the code deep enough to learn that the array will always be 2 long. Why do you believe the JIT optimizer would dive that deep into your code? – Andreas Nov 22 '19 at 14:31
  • Your bench is flawed. Using a random int each iteration means no two iterations have the same complexity, so none are comparable. Using a Level.Invocation setup flaws any measurement under one ms (as per the documentation). You have a return value, but nothing consumes it, so you have no idea how it is handled (see the JMH Blackhole class). I did a new bench addressing these issues, and the result is: the methods have the same performance within the error margin. I would **not** conclude anything at the JIT level from that. This goes to show: you have a measure, but of what exactly? – GPI Nov 22 '19 at 15:50
  • @GPI: 1. What do you suggest instead of random? Note that the complexity should be the same for all numbers. 2. What level do you suggest to use instead of Invocation 3. JMH states that we can use either return value or BlackHole to make sure the value is consumed. If it would not be the case, the entire code would be eliminated in both cases, which is clearly not the case. – Alexander Nov 22 '19 at 16:22
  • 1. I would suggest an input representative of a workload, short of which I'd use fixed values. I'd calibrate by testing a hundred or so different ones and check whether the choice actually matters (in my test it does not). 2. Use whatever else instead of Invocation. I used... no setup, only final variables. 3. You are including an array access, 8 sums, 4 mults, and 6 method invocations in a measurement that is in the 40-nanosecond ballpark. If you're here for `isOK()`, prune every possible instruction: my implementation is a one-liner: `return filterHolder.UNROLLED.isOK(holder.value);` – GPI Nov 22 '19 at 16:56
  • Another thought : can it be that one of your implementation is small enough to be inlined and not the other ? That could explain the difference in your test vs mine, especially because yours test 6 times the invocation. See : https://dzone.com/articles/jit-inlining and https://stackoverflow.com/questions/10073025/java-jit-method-inlining – GPI Nov 22 '19 at 23:24
  • @GPI I've tried using a constructor for the benchmark data instead of @Setup. Indeed, the results look similar. But I believe this is because you've simplified the work of the optimizer too much: the data is the same between the runs, so I guess the optimizer exploits this. I've also tried Level.Trial, and the results diverge again. The results with the constructor are both faster, suggesting that the JIT could somehow benefit from the fact that the same data is used between the runs. – Alexander Nov 25 '19 at 15:22
  • @GPI regarding your thought about inlining: this might be an option, but: can't it first optimize my loop (and get something very similar to my own optimization) and then inline? – Alexander Nov 25 '19 at 15:24

2 Answers


The loop presented likely falls under the "non counted" category of loops, i.e. loops for which the iteration count can be determined neither at compile time nor at run time. That is not only because of @Andreas' argument about the array size, but also because of the conditional break on random input (which was still in your benchmark when I wrote this post).

State-of-the-art compilers do not aggressively optimize such loops, since unrolling a non-counted loop usually involves duplicating the loop's exit condition as well, which improves run-time performance only if subsequent compiler optimizations can optimize the unrolled code. See this 2017 paper for details, where the authors also propose how to unroll such loops.
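The distinction can be illustrated with a small sketch (hypothetical code, not from the question's benchmark): the first method below is a classic counted loop whose trip count is fixed by the array length, while the second, like the loop in question, can exit early on a data-dependent condition, so unrolling it means duplicating that exit condition.

```java
// A stand-in for the Filter interface from the question.
interface Filter {
    boolean isOK(int i);
}

class LoopShapes {
    // Counted shape: runs exactly filters.length times, no early exit;
    // this is the kind of loop unrollers handle well.
    static int countMatches(Filter[] filters, int i) {
        int matches = 0;
        for (int j = 0; j < filters.length; ++j) {
            if (filters[j].isOK(i)) {
                ++matches;
            }
        }
        return matches;
    }

    // The question's shape: the conditional return makes the number of
    // executed iterations data-dependent, so each unrolled copy of the
    // body must carry its own copy of the exit condition.
    static boolean allMatch(Filter[] filters, int i) {
        for (int j = 0; j < filters.length; ++j) {
            if (!filters[j].isOK(i)) {
                return false;
            }
        }
        return true;
    }
}
```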

From this it follows that your assumption that you did a sort of "manual unrolling" of the loop does not hold. You're considering it a basic loop-unrolling technique to transform an iteration over an array with a conditional break into an &&-chained boolean expression; I'd consider this a rather special case and would be surprised to see a hot-spot optimizer perform such a complex refactoring on the fly. Here they discuss what it actually might do; perhaps this reference is interesting.

The following would reflect the mechanics of contemporary unrolling more closely, and is perhaps still nowhere near what unrolled machine code would look like:

if (!filters[0].isOK(i)) {
    return false;
}
if (!filters[1].isOK(i)) {
    return false;
}
return true;

You're concluding that the loop didn't unroll because one piece of code runs faster than another. Even if it did unroll, you could still see a runtime difference, because you're comparing different implementations.

If you want to gain more certainty, there's the jitwatch analyzer/visualizer of the actual JIT operations, including machine code (github) (presentation slides). If there's something to see eventually, I'd trust my own eyes more than any opinion about what the JIT may or may not do in general, since every case has its specifics. Here they fret about the difficulty of arriving at general statements for specific cases as far as the JIT is concerned, and provide some interesting links.

Since your goal is minimum runtime, the a && b && c ... form is likely the most efficient one if you don't want to depend on the hope of loop unrolling, at least more efficient than anything else presented yet. But you can't have that in a generic way. With functional composition via java.util.function there's huge overhead again (each Function is a class, and each call is a virtual method that needs dispatch). Perhaps in such a scenario it might make sense to subvert the language level and generate custom bytecode at runtime. On the other hand, && logic requires branching at the bytecode level as well and may be equivalent to if/return (which also can't be generified without overhead).
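For illustration, here is roughly what the functional-composition route looks like with java.util.function.IntPredicate (a sketch; the Composition class is hypothetical). Every and() step wraps its operands in one more object, and every test() call adds another level of virtual dispatch that the hand-written && chain does not pay:

```java
import java.util.function.IntPredicate;

class Composition {
    // Generic composition: flexible, but each and() allocates a wrapper
    // predicate, so an N-way chain is a tree of N virtual test() calls.
    static IntPredicate compose(IntPredicate... predicates) {
        IntPredicate result = i -> true;
        for (IntPredicate p : predicates) {
            result = result.and(p); // one more wrapper per composition step
        }
        return result;
    }

    // Equivalent hand-written && chain: no wrapper objects, just branches.
    static boolean handChained(int i) {
        return i % 3 == 1 && i % 7 == 3;
    }
}
```

Both forms compute the same result; the difference is purely in how many objects and dispatched calls stand between the caller and the actual comparisons.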

Curiosa Globunznik
  • just a small addendum: a counted loop in the JVM world is any loop that "runs" over an `int i = ....; i < ...; ++i`; any _other_ loop is not. – Eugene Nov 30 '19 at 19:33

TL;DR The main reason for the performance difference here is not loop unrolling. It is rather type speculation and inline caches.

Unrolling strategies

In fact, in HotSpot terminology, such loops are treated as counted, and in certain cases JVM can unroll them. Not in your case though.

HotSpot has two loop unrolling strategies: 1) unroll maximally, i.e. remove the loop altogether; or 2) glue several consecutive iterations together.

Maximal unrolling can be done, only if the exact number of iterations is known.

  if (!cl->has_exact_trip_count()) {
    // Trip count is not exact.
    return false;
  }

In your case, however, the function may return early after the first iteration.

Partial unrolling could be probably applied, but the following condition breaks unrolling:

  // Don't unroll if the next round of unrolling would push us
  // over the expected trip count of the loop.  One is subtracted
  // from the expected trip count because the pre-loop normally
  // executes 1 iteration.
  if (UnrollLimitForProfileCheck > 0 &&
      cl->profile_trip_cnt() != COUNT_UNKNOWN &&
      future_unroll_ct        > UnrollLimitForProfileCheck &&
      (float)future_unroll_ct > cl->profile_trip_cnt() - 1.0) {
    return false;
  }

Since in your case the expected trip count is less than 2, HotSpot assumes it is not worth unrolling even two iterations. Note that the first iteration is extracted into a pre-loop anyway (the loop peeling optimization), so unrolling is indeed not very beneficial here.

Type speculation

In your unrolled version, there are two different invokeinterface bytecodes. These sites have two distinct type profiles. The first receiver is always Filter1, and the second receiver is always Filter2. So, you basically have two monomorphic call sites, and HotSpot can perfectly inline both calls - so called "inline cache" which has 100% hit ratio in this case.

With the loop, there is just one invokeinterface bytecode, and only one type profile is collected. The HotSpot JVM sees that filters[j].isOK() is called 86% of the time with a Filter1 receiver and 14% of the time with a Filter2 receiver. This makes it a bimorphic call. Fortunately, HotSpot can speculatively inline bimorphic calls, too: it inlines both targets with a conditional branch. However, in this case the hit ratio will be at most 86%, and the performance will suffer from the corresponding mispredicted branches at the architecture level.

Things will be even worse if you have 3 or more different filters. In this case, isOK() will be a megamorphic call, which HotSpot cannot inline at all. So the compiled code will contain a true interface call, which has a larger performance impact.

More about speculative inlining in the article The Black Magic of (Java) Method Dispatch.

Conclusion

In order to inline virtual/interface calls, HotSpot JVM collects type profiles per invoke bytecode. If there is a virtual call in a loop, there will be just one type profile for the call, no matter if the loop is unrolled or not.

To get the best from the virtual call optimizations, you'd need to manually split the loop, primarily for the purpose of splitting type profiles. HotSpot cannot do this automatically so far.
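A minimal sketch of such a manual split (the SplitChain class is hypothetical, assuming the Filter interface from the question): in the length-2 branch, the two filters[k].isOK(i) expressions compile to two distinct invokeinterface sites, so each collects its own, here monomorphic, type profile instead of a merged bimorphic one.

```java
// A stand-in for the Filter interface from the question.
interface Filter {
    boolean isOK(int i);
}

class SplitChain implements Filter {
    private final Filter[] filters;

    SplitChain(Filter[] filters) {
        this.filters = filters;
    }

    @Override
    public boolean isOK(int i) {
        switch (filters.length) {
            case 0:
                return true;
            case 1:
                return filters[0].isOK(i);
            case 2:
                // Two separate call sites -> two separate type profiles.
                return filters[0].isOK(i) && filters[1].isOK(i);
            default:
                // Fallback: a single shared call site with a merged profile.
                for (int j = 0; j < filters.length; ++j) {
                    if (!filters[j].isOK(i)) {
                        return false;
                    }
                }
                return true;
        }
    }
}
```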

apangin
  • thanks for the great answer. Just for completeness: are you aware of any JITC techniques that might produce code for a specific instance? – Alexander Dec 01 '19 at 13:07
  • @Alexander HotSpot does not optimize code for a specific instance. It uses runtime statistics that includes per-bytecode counters, type profile, branch target probabilities etc. If you want to optimize code for a specific case, create a separate class for it, either manually or with dynamic bytecode generation. – apangin Dec 01 '19 at 14:07