
How can I run JMH benchmarks inside my existing project using JUnit tests? The official documentation recommends making a separate project, using Maven shade plugin, and launching JMH inside the main method. Is this necessary and why is it recommended?

Aleksandr Dubinsky

4 Answers


I've been running JMH inside my existing Maven project, launched from a JUnit test, with no apparent ill effects. I cannot answer why the authors recommend doing things differently; I have not observed a difference in results. JMH forks a separate JVM to run the benchmarks, which isolates them from the test harness. Here is what I do:

  • Add the JMH dependencies to your POM:

    <dependency>
      <groupId>org.openjdk.jmh</groupId>
      <artifactId>jmh-core</artifactId>
      <version>1.21</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.openjdk.jmh</groupId>
      <artifactId>jmh-generator-annprocess</artifactId>
      <version>1.21</version>
      <scope>test</scope>
    </dependency>
    

    Note that I've placed them in the test scope.

    In Eclipse, you may need to configure the annotation processor manually. NetBeans handles this automatically.
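    In case your build doesn't pick up the processor automatically, one option is to declare it explicitly on the compiler plugin. A minimal sketch, assuming maven-compiler-plugin 3.5+ (the 3.8.1 version number is an arbitrary choice) and the same JMH version as above:

    ```xml
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <version>3.8.1</version>
      <configuration>
        <annotationProcessorPaths>
          <path>
            <groupId>org.openjdk.jmh</groupId>
            <artifactId>jmh-generator-annprocess</artifactId>
            <version>1.21</version>
          </path>
        </annotationProcessorPaths>
      </configuration>
    </plugin>
    ```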

  • Create your JUnit and JMH class. I've chosen to combine both into a single class, but that is up to you. Notice that OptionsBuilder.include is what actually determines which benchmarks your JUnit test will run!

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;
    import java.util.concurrent.TimeUnit;
    import org.junit.Test;
    import org.openjdk.jmh.annotations.*;
    import org.openjdk.jmh.infra.Blackhole;
    import org.openjdk.jmh.runner.Runner;
    import org.openjdk.jmh.runner.options.*;
    
    
    public class TestBenchmark 
    {
    
          @Test public void 
        launchBenchmark() throws Exception {
    
                Options opt = new OptionsBuilder()
                    // Specify which benchmarks to run. 
                    // You can be more specific if you'd like to run only one benchmark per test.
                    .include(this.getClass().getName() + ".*")
                    // Set the following options as needed
                    .mode (Mode.AverageTime)
                    .timeUnit(TimeUnit.MICROSECONDS)
                    .warmupTime(TimeValue.seconds(1))
                    .warmupIterations(2)
                    .measurementTime(TimeValue.seconds(1))
                    .measurementIterations(2)
                    .threads(2)
                    .forks(1)
                    .shouldFailOnError(true)
                    .shouldDoGC(true)
                    //.jvmArgs("-XX:+UnlockDiagnosticVMOptions", "-XX:+PrintInlining")
                    //.addProfiler(WinPerfAsmProfiler.class)
                    .build();
    
                new Runner(opt).run();
            }
    
        // The JMH samples are the best documentation for how to use it
        // http://hg.openjdk.java.net/code-tools/jmh/file/tip/jmh-samples/src/main/java/org/openjdk/jmh/samples/
        @State (Scope.Thread)
        public static class BenchmarkState
        {
            List<Integer> list;
    
              @Setup (Level.Trial) public void
            initialize() {
    
                    Random rand = new Random();
    
                    list = new ArrayList<>();
                    for (int i = 0; i < 1000; i++)
                        list.add (rand.nextInt());
                }
        }
    
          @Benchmark public void 
        benchmark1 (BenchmarkState state, Blackhole bh) {
    
                List<Integer> list = state.list;
    
                for (int i = 0; i < 1000; i++)
                    bh.consume (list.get (i));
            }
    }
    
  • JMH's annotation processor seems to not work well with compile-on-save in NetBeans. You may need to do a full Clean and Build whenever you modify the benchmarks. (Any suggestions appreciated!)

  • Run your launchBenchmark test and watch the results!

    -------------------------------------------------------
     T E S T S
    -------------------------------------------------------
    Running com.Foo
    # JMH version: 1.21
    # VM version: JDK 1.8.0_172, Java HotSpot(TM) 64-Bit Server VM, 25.172-b11
    # VM invoker: /usr/lib/jvm/java-8-jdk/jre/bin/java
    # VM options: <none>
    # Warmup: 2 iterations, 1 s each
    # Measurement: 2 iterations, 1 s each
    # Timeout: 10 min per iteration
    # Threads: 2 threads, will synchronize iterations
    # Benchmark mode: Average time, time/op
    # Benchmark: com.Foo.benchmark1
    
    # Run progress: 0.00% complete, ETA 00:00:04
    # Fork: 1 of 1
    # Warmup Iteration   1: 4.258 us/op
    # Warmup Iteration   2: 4.359 us/op
    Iteration   1: 4.121 us/op
    Iteration   2: 4.029 us/op
    
    
    Result "benchmark1":
      4.075 us/op
    
    
    # Run complete. Total time: 00:00:06
    
    REMEMBER: The numbers below are just data. To gain reusable insights, you need to follow up on
    why the numbers are the way they are. Use profilers (see -prof, -lprof), design factorial
    experiments, perform baseline and negative tests that provide experimental control, make sure
    the benchmarking environment is safe on JVM/OS/HW level, ask for reviews from the domain experts.
    Do not assume the numbers tell you what you want them to tell.
    
    Benchmark                                Mode  Cnt  Score   Error  Units
    Foo.benchmark1                           avgt    2  4.075          us/op
    Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.013 sec
    
  • Runner.run even returns RunResult objects on which you can do assertions, etc.
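For example, a hedged sketch of turning the returned results into JUnit assertions (the helper name and the threshold parameter are mine, not part of JMH):

```java
import java.util.Collection;
import org.junit.Assert;
import org.openjdk.jmh.results.RunResult;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.options.Options;

public class BenchmarkAssertions {
    // Run the given options and fail the JUnit test if any benchmark's
    // primary score exceeds the threshold (units follow the Options' timeUnit)
    static void assertScoreBelow(Options opt, double threshold) throws Exception {
        Collection<RunResult> results = new Runner(opt).run();
        for (RunResult result : results) {
            double score = result.getPrimaryResult().getScore();
            Assert.assertTrue("Score " + score + " exceeded " + threshold,
                              score < threshold);
        }
    }
}
```

Called as `assertScoreBelow(opt, 100.0)` from launchBenchmark, this turns a performance regression into an ordinary test failure.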

  • This is not a recommended way to run JMH benchmarks. Unit tests and the IDE interfere with the measurements. Do it right from the command line. – Ivan Voroshilin Jan 29 '16 at 20:49
  • @IvanVoroshilin I've tried it both ways and did not see a difference in results. Do you have concrete advice under what conditions this becomes a problem? – Aleksandr Dubinsky Jan 30 '16 at 17:55
  • The results are less reliable; it is just a recommendation. Eliminate the external factors, which get in the way when it comes to microbenchmarking. – Ivan Voroshilin Jan 31 '16 at 05:58
  • @IvanVoroshilin Sounds like FUD spread by people who hate IDEs (I am referring to some of the core JVM developers, who also develop JMH). If we want to split hairs, we should also advise people to shut down the window manager, stop all daemons, etc. In practice, warming up and averaging over several iterations smooths out most timing noise. – Aleksandr Dubinsky Jan 31 '16 at 14:04
  • Forking should negate most possible side effects. – garkin Sep 21 '16 at 08:25
  • If only we had a benchmarking framework to measure the differences... ;) – dsmith Oct 12 '16 at 20:23
  • @AleksandrDubinsky are you sure that LookUtils belongs to standard libs or to those JMH dependencies? – JeanValjean Oct 11 '17 at 09:55
  • @JeanValjean I don't see a reference to "LookUtils" anywhere on this page. – Aleksandr Dubinsky Oct 12 '17 at 14:59
  • @AleksandrDubinsky weird! I probably got access to a cached version, then! Thanks! – JeanValjean Oct 13 '17 at 12:38
  • @AleksandrDubinsky I'd suggest adding the StackProfiler via `.addProfiler(StackProfiler.class)`, which prints very useful profiling results at the end, like: `[Thread state: RUNNABLE] 50.0% java.net.SocketInputStream.socketRead0, 21.5% com.mycompany.myapp.MyProfiledClass.myMethod, 9.4% java.io.WinNTFileSystem.getBooleanAttributes, 4.7% java.util.zip.ZipFile.getEntry, 3.0% java.lang.String.regionMatches ...` – Pleymor Jun 07 '19 at 13:26

A declarative approach using annotations:

import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
import org.junit.Test;
import org.openjdk.jmh.annotations.*;

@State(Scope.Benchmark)
@Threads(1)
public class TestBenchmark {

    @Param({"10", "100", "1000"})
    public int iterations;

    @Setup(Level.Invocation)
    public void setupInvocation() throws Exception {
        // executed before each invocation of the benchmark
    }

    @Setup(Level.Iteration)
    public void setupIteration() throws Exception {
        // executed before each iteration
    }

    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    @Fork(warmups = 1, value = 1)
    @Warmup(batchSize = -1, iterations = 3, time = 10, timeUnit = TimeUnit.MILLISECONDS)
    @Measurement(batchSize = -1, iterations = 10, time = 10, timeUnit = TimeUnit.MILLISECONDS)
    @OutputTimeUnit(TimeUnit.MILLISECONDS)
    public void test() throws Exception {
        Thread.sleep(ThreadLocalRandom.current().nextInt(0, iterations));
    }

    @Test
    public void benchmark() throws Exception {
        // With no arguments, this runs every benchmark found on the classpath
        String[] argv = {};
        org.openjdk.jmh.Main.main(argv);
    }
}
Sergio
Cristian Florescu
  • Code-only answers are frowned-upon. How is this solution different and/or better than the existing answer? How does calling jmh.Main cause the correct tests to be run? – Aleksandr Dubinsky Dec 27 '19 at 11:37
  • This is another, simplified approach. That's all. – Cristian Florescu Dec 27 '19 at 18:42
  • I wasn't trying to criticize. I was listing questions that you should answer in the text of your post. It is bad to post some code without explanation. – Aleksandr Dubinsky Jan 02 '20 at 09:35
  • The difference is more or less obvious: the code above provides the test setup as annotations, while the other answer is a programmatic approach. Both have in common that JUnit is just used to start JMH. It's a personal preference; I prefer the annotation approach. – cljk Jan 22 '20 at 12:51
  • Thanks, although I do get a message about "Unable to find the resource: /META-INF/BenchmarkList" – Luke Apr 01 '20 at 00:44
  • +1 Benchmark annotation-config is **shared** when running from Junit, build plugin or command-line. This supports running quick benchmarks from IDE (via Junit) and formal ones from build environment – drekbour Aug 01 '20 at 15:05
  • To make it more obvious what is happening and make it more similar to the above answer, the benchmark method should probably be converted to this: @Test public void benchmark() throws Exception { Options opt = new OptionsBuilder() .include(TestBenchmark.class.getSimpleName()) .build(); //how to run benchmark and collect results Collection runResults = new Runner(opt).run(); } – Chuck C Feb 04 '22 at 21:59
Another variant: a JUnit test that launches JMH via org.openjdk.jmh.Main, passing the class name to select the benchmarks, which here compare three ways of UTF-8-encoding a char[]:
import java.nio.CharBuffer;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.TimeUnit;
import org.junit.Test;
import org.openjdk.jmh.annotations.*;

@State(Scope.Benchmark)
@Threads(1)
@Fork(1)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@Warmup(iterations = 5, time = 1)
@Measurement(iterations = 5, time = 1)
@BenchmarkMode(Mode.All)
public class ToBytesTest {

  public static void main(String[] args) {
    // quick sanity check that all three variants agree on the first byte
    ToBytesTest test = new ToBytesTest();
    System.out.println(test.string()[0] == test.charBufferWrap()[0]
        && test.charBufferWrap()[0] == test.charBufferAllocate()[0]);
  }

  @Test
  public void benchmark() throws Exception {
    // the class name acts as an include filter, so only these benchmarks run
    org.openjdk.jmh.Main.main(new String[]{ToBytesTest.class.getName()});
  }

  char[] chars = new char[]{'中', '国'};

  @Benchmark
  public byte[] string() {
    return new String(chars).getBytes(StandardCharsets.UTF_8);
  }

  @Benchmark
  public byte[] charBufferWrap() {
    return StandardCharsets.UTF_8.encode(CharBuffer.wrap(chars)).array();
  }

  @Benchmark
  public byte[] charBufferAllocate() {
    CharBuffer cb = CharBuffer.allocate(chars.length).put(chars);
    cb.flip();
    return StandardCharsets.UTF_8.encode(cb).array();
  }
}
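One caveat with the two CharBuffer variants: ByteBuffer.array() returns the whole backing array, which can be longer than the encoded content, so those benchmarks can return arrays of a different length than getBytes(). A sketch of trimming to the buffer's limit (the helper class and method names are mine):

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class EncodeTrim {
    // Charset.encode() returns a ByteBuffer whose backing array may have
    // excess capacity; copy up to limit() to get exactly the encoded bytes
    static byte[] encodeChars(char[] chars) {
        ByteBuffer buf = StandardCharsets.UTF_8.encode(CharBuffer.wrap(chars));
        return Arrays.copyOf(buf.array(), buf.limit());
    }

    public static void main(String[] args) {
        char[] chars = {'中', '国'};
        byte[] viaString = new String(chars).getBytes(StandardCharsets.UTF_8);
        // now the arrays match in content and length, not just in the first byte
        System.out.println(Arrays.equals(viaString, encodeChars(chars)));
    }
}
```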
  • Code-only answers are frowned upon. At the least, please explain how your answer is different from other, similar answers. – Aleksandr Dubinsky Mar 22 '22 at 07:47

You may write your own JUnit Runner to run the benchmark. This also lets you run and debug benchmarks from the Eclipse IDE.

  1. Write a class extending org.junit.runner.Runner:

    public class BenchmarkRunner extends Runner {
      //...
    }
    
  2. Implement the constructor and a few methods:

    public class BenchmarkRunner extends Runner {
       public BenchmarkRunner(Class<?> benchmarkClass) {
       }
    
       public Description getDescription() {
        //...
       }  
    
       public void run(RunNotifier notifier) {
        //...
       }
    }
    
  3. Add the runner to your test class:

    @RunWith(BenchmarkRunner.class)  
    public class CustomCollectionBenchmark {
        //...
    }  
    

I've described it in detail in my blog post: https://vbochenin.github.io/running-jmh-from-eclipse
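For reference, a minimal sketch of what such a runner might look like (the details differ from the blog post; the include pattern and forks(1) are my assumptions). Note that the JMH Runner must be fully qualified, since it collides with org.junit.runner.Runner:

```java
import org.junit.runner.Description;
import org.junit.runner.Runner;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunNotifier;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class BenchmarkRunner extends Runner {
    private final Class<?> benchmarkClass;
    private final Description description;

    public BenchmarkRunner(Class<?> benchmarkClass) {
        this.benchmarkClass = benchmarkClass;
        this.description = Description.createSuiteDescription(benchmarkClass);
    }

    @Override
    public Description getDescription() {
        return description;
    }

    @Override
    public void run(RunNotifier notifier) {
        notifier.fireTestStarted(description);
        try {
            // run every @Benchmark method of the annotated class
            Options opt = new OptionsBuilder()
                    .include(benchmarkClass.getSimpleName() + "\\..*")
                    .forks(1)
                    .build();
            new org.openjdk.jmh.runner.Runner(opt).run();
        } catch (Throwable t) {
            // surface benchmark failures as ordinary JUnit failures
            notifier.fireTestFailure(new Failure(description, t));
        } finally {
            notifier.fireTestFinished(description);
        }
    }
}
```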

Vlad Bochenin