
Just a quick question, not an "I need help!" question, more to satisfy my curiosity :)

We have written our own custom Jackson JSON serializer and deserializer that use reflection to serialize/deserialize an object. For example, the serializer looks up an object's properties using Introspector.getBeanInfo(), gets the PropertyDescriptor for the current object field, and calls the read or write method to get and set the values when needed.
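For context, a minimal sketch of that kind of reflective serializer might look like the following. The `Person` bean and the crude string-building are made up for illustration; the real implementation would use Jackson's generator API, but the `Introspector`/`PropertyDescriptor` lookup and the reflective `invoke` are the parts being discussed:

```java
import java.beans.BeanInfo;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.lang.reflect.Method;

public class ReflectiveSerializerSketch {

    // Hypothetical bean used only for this example.
    public static class Person {
        private String name = "Alice";
        private int age = 30;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public int getAge() { return age; }
        public void setAge(int age) { this.age = age; }
    }

    // Builds a crude JSON string by reading each bean property reflectively.
    public static String serialize(Object bean) throws Exception {
        // Stop at Object.class so the "class" pseudo-property is excluded.
        BeanInfo info = Introspector.getBeanInfo(bean.getClass(), Object.class);
        StringBuilder json = new StringBuilder("{");
        boolean first = true;
        for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
            Method read = pd.getReadMethod();
            if (read == null) continue;          // skip write-only properties
            if (!first) json.append(",");
            Object value = read.invoke(bean);    // the reflective call in question
            json.append("\"").append(pd.getName()).append("\":");
            if (value instanceof String) {
                json.append("\"").append(value).append("\"");
            } else {
                json.append(value);
            }
            first = false;
        }
        return json.append("}").toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(serialize(new Person()));
    }
}
```

Both the `getBeanInfo()` lookup and the `invoke()` call happen on every serialization here, which is where the warm-up effect described below comes in.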

At first, this would take quite some time (250-500 milliseconds); however, after many calls to the serializer we noticed that this dropped drastically, to around 25-50 milliseconds. From looking around the internet, what I can gather is that the JVM can optimize reflection, but how does it do this? Is it actually keeping track of each call to the read or write methods and generating bytecode so that the reflection part is skipped entirely?

KingTravisG
    I believe it's the JIT that kicked in rather than some reflection optimization. – Random42 Dec 22 '13 at 17:21
  • @m3th0dman Most likely it is both. The JIT optimises all the code, including reflection. – Peter Lawrey Dec 22 '13 at 17:56
  • 25-50 milliseconds is still a very long time. A short JSON message should be closer to a 1 ms. – Peter Lawrey Dec 22 '13 at 17:57
  • The objects themselves are pretty complicated, so we believe getting it down to 25-50 ms is still pretty fast; some of the smaller objects take around 0.5 - 1 ms, which is pretty impressive :) – KingTravisG Dec 22 '13 at 17:59

2 Answers


Reflective method calls are optimized after 16 invocations (the default inflation threshold) of a particular method. The optimized version relies on generated bytecode, which means that there is basically no more reflection involved.

However, that optimization only concerns the overhead of calling invoke on an already known Method instance, while most of the overhead of using reflection stems from member lookup. This aspect will surely benefit from JIT compilation, which by default occurs after 10,000 passes over the same piece of code.
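The inflation effect can be made visible with a toy loop like the one below. The class and timings are illustrative only; the exact threshold is a HotSpot implementation detail (tunable via `-Dsun.reflect.inflationThreshold`, and changed entirely in recent JDKs that reimplement reflection over method handles):

```java
import java.lang.reflect.Method;

public class InflationDemo {

    public static String greet() { return "hi"; }

    public static void main(String[] args) throws Exception {
        // Member lookup happens once, outside the loop.
        Method m = InflationDemo.class.getMethod("greet");

        // Early calls go through a slower native accessor; once the
        // inflation threshold is crossed, the JVM generates a bytecode
        // stub (visible as a GeneratedMethodAccessor class with
        // -verbose:class), and later calls get much cheaper.
        for (int i = 0; i < 20; i++) {
            long t0 = System.nanoTime();
            String result = (String) m.invoke(null);
            long t1 = System.nanoTime();
            System.out.printf("call %2d: %6d ns -> %s%n", i, t1 - t0, result);
        }
    }
}
```

Note that this only demonstrates the cost of `invoke` itself; as mentioned above, the lookup cost is a separate matter.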

Lookup optimization may also occur within Jackson itself, by caching the Method instances.
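If you are rolling your own (de)serializer, you can apply the same caching idea yourself. A minimal sketch, assuming a `ConcurrentHashMap` keyed by class (the class and method names here are invented for this example):

```java
import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class DescriptorCache {

    private static final ConcurrentMap<Class<?>, PropertyDescriptor[]> CACHE =
            new ConcurrentHashMap<>();

    // Pays the Introspector lookup cost once per class; subsequent calls
    // return the cached array.
    public static PropertyDescriptor[] descriptorsFor(Class<?> type) {
        return CACHE.computeIfAbsent(type, t -> {
            try {
                return Introspector.getBeanInfo(t, Object.class)
                                   .getPropertyDescriptors();
            } catch (IntrospectionException e) {
                throw new IllegalStateException(e);
            }
        });
    }

    // Hypothetical bean for demonstration.
    public static class Sample {
        public String getFoo() { return "foo"; }
    }

    public static void main(String[] args) {
        PropertyDescriptor[] a = descriptorsFor(Sample.class);
        PropertyDescriptor[] b = descriptorsFor(Sample.class);
        System.out.println(a[0].getName() + ", cached: " + (a == b));
    }
}
```

This removes the per-call lookup cost entirely, leaving only the `invoke` overhead, which the inflation mechanism then takes care of.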

Marko Topolnik
  • Note also that Java uses a `SoftReference` cache for `Method` instances. – Sotirios Delimanolis Dec 22 '13 at 17:29
  • Ah right - we were thinking of caching the PropertyDescriptors in a HashMap of some sort so that we could avoid having to look them up again after the first iteration - at least we know now that it would be worthwhile – KingTravisG Dec 22 '13 at 17:33
  • Yes, caching the PropertyDescriptors gives a noticeable boost in speed - however, it wasn't until about 100-150 iterations in that the time taken was reduced to less than 10-15 milliseconds each time, which is quite a drop considering the size of the objects being serialized – KingTravisG Dec 22 '13 at 17:50
  • So you are experiencing both the advantage in hashed lookup *and* JIT compilation. – Marko Topolnik Dec 22 '13 at 17:54

See my answer here:

Java benchmarking - why is the second loop faster?

That may well explain what you are seeing here too.

Tim B
  • Yeah - we have a JUnit test that runs the serialization 10,000 times to get an average time and prints the time every 100th iteration. Running this test suite by itself we get roughly 200-300 ms average; running it with the rest of the tests we get results around 50-100 ms! – KingTravisG Dec 22 '13 at 17:32