I've developed a small library that enables declarative argument validation using annotations, something like this:
@Validate({"not null | number"})
public static Integer notNullAnnotated(Integer number) {
    return number + 1; // note: "return number++" would return the original value, since the post-increment on the parameter is discarded
}
Now I'm benchmarking this against a plain Java version of the same check:
public static Integer notNullInline(Integer number) {
    if (number != null) {
        return number + 1; // same fix as above: post-incrementing the parameter would be a no-op
    } else {
        throw new IllegalArgumentException("argument should not be null");
    }
}
and here is the test:
@Test
public void performanceTest() {
    long time = System.nanoTime();
    for (int i = 0; i < iterationCount; i++) {
        notNullAnnotated(i);
    }
    System.out.println("time annotated : " + (System.nanoTime() - time));

    time = System.nanoTime();
    for (int i = 0; i < iterationCount; i++) {
        notNullInline(i); // TODO does the compiler do any optimization here?
    }
    System.out.println("time inline : " + (System.nanoTime() - time));
}
I know this is not the proper way to write a benchmark. For now I'd rather avoid pulling in a benchmarking library for such a simple test (even with this naive setup the results look good), but I'd like to know: does the compiler do any optimization here?
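
For what it's worth, here is a rough sketch of how I could tighten the test without adding a library: a warm-up pass so the JIT has already compiled both methods before timing starts, and accumulating the return values so the calls can't be discarded as dead code. The WARMUP constant and the sum accumulator are just placeholder names I made up for this sketch.

@Test
public void performanceTest() {
    final int WARMUP = 10_000; // assumed warm-up count, intended to trigger JIT compilation before timing
    long sum = 0;              // accumulate results so the JIT cannot drop the calls as dead code

    // warm-up: exercise both methods before measuring
    for (int i = 0; i < WARMUP; i++) {
        sum += notNullAnnotated(i);
        sum += notNullInline(i);
    }

    long time = System.nanoTime();
    for (int i = 0; i < iterationCount; i++) {
        sum += notNullAnnotated(i);
    }
    System.out.println("time annotated : " + (System.nanoTime() - time));

    time = System.nanoTime();
    for (int i = 0; i < iterationCount; i++) {
        sum += notNullInline(i);
    }
    System.out.println("time inline : " + (System.nanoTime() - time));

    System.out.println("checksum: " + sum); // print the accumulator so it is observably used
}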