I am running the following experiment and was surprised to find that there is no measurable difference between the two runs:
import java.util.ArrayList;
import java.util.List;

import com.google.common.math.Stats;
import org.apache.avro.specific.SpecificRecord;

public class NaiveBenchmark {

    public static void main(String[] args) throws Exception {
        long count = 10_000_000L;
        long measureCount = 1000L;

        // Measurement 1: the new operator
        SpecificRecord tp1 = null;
        List<Long> times = new ArrayList<>();
        for (long j = 0; j < measureCount; ++j) {
            long timeStart = System.currentTimeMillis();
            for (long i = 0; i < count; ++i) {
                tp1 = new WebPageView();
            }
            times.add(System.currentTimeMillis() - timeStart);
        }
        Stats st = Stats.of(times);
        double avg = st.mean();
        double stdDev = st.populationStandardDeviation();
        times.sort(Long::compareTo);
        // Median: average of the two middle elements (the indices coincide for odd sizes).
        int upper = times.size() / 2;
        int lower = (times.size() - 1) / 2;
        double median = (times.get(upper) + times.get(lower)) / 2.0;
        System.out.println("avg: " + avg);
        System.out.println("stdDev: " + stdDev);
        System.out.println("median: " + median);
        System.out.println(tp1);

        // Measurement 2: Class.newInstance()
        SpecificRecord tp2 = null;
        List<Long> times2 = new ArrayList<>();
        for (long j = 0; j < measureCount; ++j) {
            long timeStart = System.currentTimeMillis();
            for (long i = 0; i < count; ++i) {
                tp2 = WebPageView.class.newInstance();
            }
            times2.add(System.currentTimeMillis() - timeStart);
        }
        Stats st2 = Stats.of(times2);
        double avg2 = st2.mean();
        double stdDev2 = st2.populationStandardDeviation();
        times2.sort(Long::compareTo);
        int upper2 = times2.size() / 2;
        int lower2 = (times2.size() - 1) / 2;
        double median2 = (times2.get(upper2) + times2.get(lower2)) / 2.0;
        System.out.println("avg: " + avg2);
        System.out.println("stdDev: " + stdDev2);
        System.out.println("median: " + median2);
        System.out.println(tp2);
    }
}
The results:
avg: 110.63300000000005
stdDev: 47.07256431298379
median: 100.0
{"aid": 0, "uid": 0, "rid": 0, "sid": 0, "d": null, "p": null, "r": null, "f": null, "q": null, "ts": 0}
avg: 101.0840000000001
stdDev: 7.8092857547921835
median: 99.0
{"aid": 0, "uid": 0, "rid": 0, "sid": 0, "d": null, "p": null, "r": null, "f": null, "q": null, "ts": 0}
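As an aside, the median of an even-length sorted list is the average of the elements at indices size/2 - 1 and size/2 (for odd lengths it is the single middle element). A minimal, self-contained sketch of that computation (the class name MedianSketch is made up for illustration):

```java
import java.util.Arrays;
import java.util.List;

public class MedianSketch {

    static double median(List<Long> sorted) {
        int n = sorted.size();
        if (n % 2 == 1) {
            // Odd length: the single middle element.
            return sorted.get(n / 2);
        }
        // Even length: average of the two middle elements.
        return (sorted.get(n / 2 - 1) + sorted.get(n / 2)) / 2.0;
    }

    public static void main(String[] args) {
        System.out.println(median(Arrays.asList(1L, 3L, 5L, 7L))); // 4.0
        System.out.println(median(Arrays.asList(1L, 3L, 5L)));     // 3.0
    }
}
```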
Update1:
Many of you pointed out that it is impossible to benchmark the JVM this way, because heavy JIT optimization masks the performance difference between new Something() and Something.class.newInstance().
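For hand-rolled loops like the one above, one common mitigation is to make every created object observably live, so the JIT cannot discard the allocations as dead code. A minimal sketch under that assumption (Payload and DeadCodeSketch are hypothetical stand-ins, not part of my project):

```java
// Sketch: consume every result so the JIT cannot eliminate the
// allocations as dead code in a hand-rolled benchmark.
public class DeadCodeSketch {

    static class Payload { } // hypothetical stand-in for WebPageView

    public static void main(String[] args) throws Exception {
        long sink = 0;
        long start = System.nanoTime();
        for (int i = 0; i < 100_000; i++) {
            // Both allocation paths feed a value the program actually uses.
            sink += new Payload().hashCode();
            sink += Payload.class.getDeclaredConstructor().newInstance().hashCode();
        }
        long elapsedNs = System.nanoTime() - start;
        // Printing the sink keeps it (and the allocations) observably live.
        System.out.println("sink: " + sink);
        System.out.println("measured: " + (elapsedNs > 0));
    }
}
```

JMH's Blackhole and returning values from @Benchmark methods serve the same purpose, which is why the methodology below is preferred.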
Update2:
After repeating the test with the suggested methodology, the results are rather surprising to me:
Benchmark                   Mode  Cnt   Score   Error  Units
ReflectionTest.newInstance  avgt    5  12.923 ± 0.801  ns/op
ReflectionTest.newOperator  avgt    5  11.524 ± 0.289  ns/op
Update3: The test code:
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@Warmup(iterations = 5, time = 10, timeUnit = TimeUnit.SECONDS)
@Measurement(iterations = 5, time = 100, timeUnit = TimeUnit.SECONDS)
@State(Scope.Benchmark)
public class ReflectionTest {

    public static void main(String[] args) throws RunnerException {
        Options opt = new OptionsBuilder()
                .include(ReflectionTest.class.getSimpleName())
                .forks(1)
                .build();
        new Runner(opt).run();
    }

    @Benchmark
    public WebPageView newOperator() {
        return new WebPageView();
    }

    @Benchmark
    public WebPageView newInstance() throws InstantiationException, IllegalAccessException {
        return WebPageView.class.newInstance();
    }
}
This question still remains unanswered: no test so far has shown any meaningful difference between Class.newInstance() and new for the class we use. The class in question is an implementation of Avro's SpecificRecord:
public class WebPageView extends org.apache.avro.specific.SpecificRecordBase implements org.apache.avro.specific.SpecificRecord
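Independent of the timing question, note that Class.newInstance() has been deprecated since Java 9, partly because it rethrows any checked exception thrown by the constructor without declaring it. The replacement recommended by its Javadoc is getDeclaredConstructor().newInstance(). A minimal sketch using a hypothetical stand-in class (Sample is not part of my project):

```java
public class NewInstanceAlternative {

    public static class Sample { } // hypothetical stand-in for WebPageView

    public static void main(String[] args) throws Exception {
        // Deprecated since Java 9: Sample.class.newInstance()
        // Recommended replacement; constructor failures surface as
        // InvocationTargetException instead of leaking unchecked.
        Sample s = Sample.class.getDeclaredConstructor().newInstance();
        System.out.println(s.getClass().getSimpleName()); // Sample
    }
}
```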