It is possible, but it would be a waste of time.
As you're probably aware, ScalaMeter is designed to remove the effects of variation in function execution times, so that it's possible to accurately benchmark those execution times. For example, you might want to verify that a function completes within a required time, or to determine whether its performance is maintained over time as changes are made to the code base.
Why is that so challenging? Well, there are a number of obstacles to overcome:
- The JVM has a number of different options for executing a program's compiled Java bytecode. Some (such as the Zero VM) just interpret the code; others utilize just-in-time (JIT) compilation to optimize its translation into the host CPU's machine code; the HotSpot Server VM aggressively improves performance over time, so that code runs incrementally faster the longer it executes. For benchmarking purposes, the HotSpot Client VM performs good optimization and reaches a steady state quickly, which allows us to start measuring performance sooner. However, we still need to allow the JIT compiler to warm up, and so we must disregard the first few, slower executions (runs) that would otherwise bias our results. ScalaMeter does a pretty good job of handling this warmup by itself, but the number of runs to be discarded is configurable (see the sketch after this list).
- The JVM performs a number of garbage collection (GC) cycles, seemingly at random, which can similarly slow down performance when they occur. ScalaMeter can be configured to ignore executions in which GC cycles occurred.
- The load on the host machine varies as it executes threads belonging to other processes, which can also slow down execution times. ScalaMeter deals with this by taking the fastest observed time over a fixed number of runs, rather than an average.
- If you're running from SBT, executing benchmarks in a forked JVM will perform better, and with less variation, than running them in the same JVM instance as SBT (since, in the shared case, SBT itself is consuming some of that JVM's resources).
- Virtual memory page faults (in which the memory making up the application's working set is switched to/from a paging file) will also randomly impact performance.
- The performance of many functions will depend upon their arguments (and, if you're not into functional programming, upon shared mutable state). Tying performance to argument values is also something ScalaMeter is good at, through its use of generators. (For example, consider a size operation on a List: it will clearly take longer to execute as the number of elements in the List increases.)
- Etc. You can find more on these issues in the ScalaMeter Getting Started Introduction.
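To make those warmup and run counts concrete, here's a minimal sketch of how they can be set through ScalaMeter's config DSL. The exec.minWarmupRuns, exec.maxWarmupRuns and exec.benchRuns keys are part of ScalaMeter's documented API; the ListBenchmark object and the sizes/lists generators are just illustrative names I've made up for this example:

import org.scalameter.api._
import org.scalameter.picklers.Implicits._

object ListBenchmark extends Bench.ForkedTime {

  // Generate lists of increasing size, so that performance can be tied to the argument.
  val sizes = Gen.range("size")(10000, 50000, 10000)
  val lists = sizes.map(n => List.fill(n)(0))

  performance of "List" in {
    measure method "size" in {
      using(lists) config (
        exec.minWarmupRuns -> 10, // Discard at least the first 10 (warmup) runs.
        exec.maxWarmupRuns -> 30, // Give the JIT at most 30 runs to reach a steady state.
        exec.benchRuns -> 25      // Then measure over 25 runs.
      ) in {
        _.size
      }
    }
  }
}

(Ignoring runs in which GC cycles occurred is handled by the measurer rather than by a config key; ScalaMeter provides Measurer.IgnoringGC for that purpose.)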
Clearly, all benchmarks should be performed on the same host machine so that the results are comparable, since CPU, OS, memory, BIOS configuration, etc. all affect performance too.
So, having explained all that, you will understand why ScalaMeter needs to execute the same function a lot! ;-)
In your case, doSomething() takes no arguments, so you can use a Gen[T].single generator that identifies the class or object to which doSomething() belongs, which will look something like the following.
Note: This is written as a ScalaMeter test, and so the source should be under src/test/scala:
import org.scalameter.api._
import org.scalameter.picklers.Implicits._

object MyBenchmark extends Bench.ForkedTime {

  // We have no arguments. Instead, create a single "generator" that identifies the class or
  // object that doSomething belongs to. This assumes doSomething() belongs to object
  // MyObject.
  val owner = Gen.single("owner")(MyObject)

  // Measure MyObject.doSomething()'s performance.
  performance of "MyObject" in {
    measure method "doSomething()" in {
      using(owner) in {
        _.doSomething()
      }
    }
  }
}
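As an aside, to run this under SBT you'll need ScalaMeter on the test classpath and its test framework registered. The following build.sbt settings come from ScalaMeter's getting-started docs (substitute whatever ScalaMeter version you're actually using; 0.19 is just a placeholder here):

// build.sbt
libraryDependencies += "com.storm-enroute" %% "scalameter" % "0.19" % Test
testFrameworks += new TestFramework("org.scalameter.ScalaMeterFramework")

// Benchmarks shouldn't run in parallel with other tests, and buffered logging
// interleaves ScalaMeter's output.
parallelExecution in Test := false
logBuffered := false

You should then be able to run the benchmark with sbt testOnly MyBenchmark (or just sbt test).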
(BTW: I would have thought that benchmarking functions with no arguments would be more straightforward than this, but this is the best I've been able to come up with so far. If anyone has a better idea, please add a comment and let me know!)
So, if all of that is overkill, you might want to try something like this:
// Measure the number of nanoseconds taken to execute a by-name argument.
def measureTime(x: => Unit): Long = {
  val start = System.nanoTime()
  x

  // Calculate how long that took and return the value.
  System.nanoTime() - start
}

measureTime {
  doSomething()
}
Bear in mind, though, that this executes the function only once, so the measured time will vary wildly from one invocation to the next.
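If that's too crude, a middle ground (without pulling in ScalaMeter) is to borrow its fastest-of-N idea from above and take the minimum over several runs. This is still only a sketch; measureTimeBest and its runs parameter are names I've invented here:

// Run the block several times and keep the fastest observation, crudely mimicking
// ScalaMeter's approach of taking the minimum over a fixed number of runs.
def measureTimeBest(runs: Int)(x: => Unit): Long =
  (1 to runs).map { _ =>
    val start = System.nanoTime()
    x
    System.nanoTime() - start
  }.min

measureTimeBest(20) {
  doSomething()
}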