We have an OSGi container with a lot of products running inside it, one of them being our product.
We run performance tests against it, and there is a weird problem: every OSGi container restart results in a performance deviation of up to 400% for some of our tests.
Through some testing I was able to track this down to this method:
public static Method getMethodForSlotKey(Class<?> cls, String slotKey, String methodType) {
    Method[] methods = cls.getMethods();
    if (methods != null && methods.length > 0) {
        for (Method method : methods) {
            String methName = method.getName();
            if (methName.startsWith(methodType)) {
                IDataAnnotation annot = method.getAnnotation(IDataAnnotation.class);
                if (annot != null) {
                    String annotSlotKey = annot.SlotKey();
                    if (annotSlotKey != null && annotSlotKey.equals(slotKey)) {
                        Class<?>[] paramTypes = method.getParameterTypes();
                        // for now, check length == 1 for setter and 0 for getter.
                        int len = SET_TXT.equals(methodType) ? 1 : 0;
                        if (paramTypes != null && paramTypes.length == len) {
                            return method;
                        }
                    }
                }
            }
        }
    }
    return null;
}
This method mainly does reflection and string comparison.
Now, what I did is cache the results of this method, and the deviation instantly drops to 10-20%. Of course, this method is called often, so it is obvious that caching is an improvement.
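For reference, the caching looks roughly like this: a ConcurrentHashMap keyed by class name, slot key, and method type. This is a simplified sketch, not the actual production code; it matches by method name and parameter count only and omits the IDataAnnotation check, and the class and method names here are illustrative:

```java
import java.lang.reflect.Method;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MethodCache {

    // cache key: "className#slotKey#methodType"
    private static final Map<String, Method> CACHE = new ConcurrentHashMap<>();

    public static Method getMethodForSlotKeyCached(Class<?> cls, String slotKey, String methodType) {
        String key = cls.getName() + '#' + slotKey + '#' + methodType;
        // note: computeIfAbsent does not store null results, so failed lookups
        // are retried on every call; wrap the value if negative caching matters
        return CACHE.computeIfAbsent(key, k -> lookup(cls, slotKey, methodType));
    }

    // simplified lookup: matches "set"/"get" + slot key by name and checks the
    // parameter count; the original method additionally checks an IDataAnnotation
    private static Method lookup(Class<?> cls, String slotKey, String methodType) {
        int expectedParams = "set".equals(methodType) ? 1 : 0;
        for (Method m : cls.getMethods()) {
            if (m.getName().equalsIgnoreCase(methodType + slotKey)
                    && m.getParameterTypes().length == expectedParams) {
                return m;
            }
        }
        return null;
    }
}
```

In an OSGi setting the key should probably also include the classloader (or use the Class object itself as part of a composite key), since two bundles can load classes with the same name.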
Still, I don't understand why the non-cached version has such a high deviation when the only difference is an OSGi / JVM restart. What exactly may happen during a restart? Are there any known performance issues with different classloaders, for instance? Is it possible that in an OSGi environment libraries are loaded in a different order between restarts?
I am searching for a clue here to make sense of this.
UPDATE
It turns out that this call:
Method[] methods = cls.getMethods();
is causing the deviation. I still don't understand why, so if anyone does, I'd be happy to hear about it.
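For anyone who wants to reproduce this, here is a rough way to time the call in isolation. This is a crude nanoTime loop, not a proper benchmark (JIT warm-up and GC will skew single runs), and the class and method names are mine:

```java
import java.lang.reflect.Method;

public class GetMethodsTiming {

    // average nanoseconds per cls.getMethods() call over the given iterations
    public static long averageNanos(Class<?> cls, int iterations) {
        long start = System.nanoTime();
        int sink = 0;
        for (int i = 0; i < iterations; i++) {
            // getMethods() returns a fresh copy of the Method[] on every call
            Method[] methods = cls.getMethods();
            sink += methods.length; // keep the result live so the loop isn't optimized away
        }
        long elapsed = System.nanoTime() - start;
        return sink != Integer.MIN_VALUE ? elapsed / iterations : -1;
    }

    public static void main(String[] args) {
        System.out.println("String.getMethods(): "
                + averageNanos(String.class, 100_000) + " ns/call");
    }
}
```

Running this once before and once after a container restart, against the same classes the tests touch, would at least show whether the per-call cost itself changes between restarts.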