
I am interested in this question and have tried a couple of things.

Generally, one can use tools such as JProfiler or VisualVM to perform CPU profiling and search for the methods that are under investigation.

Also, one may use libraries such as JETM, CodaHale Metrics, or Netflix Servo to introduce measuring points in the application. However, this may not work for third-party libraries, because these tools expect the measuring points to be placed inside your own code. This is still the most interesting approach for me, since such libraries introduce only a minimal amount of code.
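For example, with the CodaHale (Dropwizard) Metrics library a `Timer` can wrap a code section. The sketch below assumes the Metrics 3.x `com.codahale.metrics` package names and only illustrates where the measuring point would go:

```java
import com.codahale.metrics.ConsoleReporter;
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;

public class TimedSectionExample {
    private static final MetricRegistry registry = new MetricRegistry();
    // Records call count and latency distribution for the wrapped section
    private static final Timer logTimer = registry.timer("logging-calls");

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            try (Timer.Context ignored = logTimer.time()) {
                doSomething(); // the section to measure, e.g. a logger call
            }
        }
        // Print the collected statistics (count, mean, percentiles) to stdout
        ConsoleReporter.forRegistry(registry).build().report();
    }

    private static void doSomething() {
        // placeholder for the real call under measurement
    }
}
```

The catch, as said above, is that the `logTimer.time()` call has to live in code I control; I cannot place it inside the third-party library itself.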

Last but not least, one may also use approaches based on AOP, such as Spring AOP, to measure specific executions of third-party libraries. Since the project is not using AOP at all, I'd rather not introduce new dependencies for the sole purpose of this measurement.
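For completeness, such an aspect would look roughly like the sketch below; the pointcut expression targeting `org.apache.commons.logging.Log` is just an assumption to illustrate the idea. Note also that proxy-based Spring AOP only advises Spring-managed beans, so intercepting calls on a third-party logging interface would in practice require full AspectJ load-time weaving:

```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class LoggingTimingAspect {

    // Intercept every call to a method on org.apache.commons.logging.Log
    // (the pointcut is an assumption; adjust it to the library being measured)
    @Around("execution(* org.apache.commons.logging.Log.*(..))")
    public Object time(ProceedingJoinPoint pjp) throws Throwable {
        long start = System.nanoTime();
        try {
            return pjp.proceed(); // run the intercepted logging call
        } finally {
            long elapsed = System.nanoTime() - start;
            System.out.printf("%s took %d ns%n", pjp.getSignature(), elapsed);
        }
    }
}
```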

In my case, for instance, I would like to measure the execution of logging methods, i.e. a mix of the JCL Log interface and the Log4j Logger implementation. I would like to use a JUnit RunListener with Maven Surefire and measure this execution across all the unit tests that I have.
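A plain `RunListener` only gives me per-test timing, not the time spent specifically inside the logging calls, but as a starting point it could look like the sketch below (the class name `TimingRunListener` is just a placeholder):

```java
import org.junit.runner.Description;
import org.junit.runner.notification.RunListener;

// Placeholder listener that times each test method from start to finish.
// Note: a single field is not safe with parallel test execution.
public class TimingRunListener extends RunListener {
    private long startNanos;

    @Override
    public void testStarted(Description description) {
        startNanos = System.nanoTime();
    }

    @Override
    public void testFinished(Description description) {
        long elapsedMillis = (System.nanoTime() - startNanos) / 1000000L;
        System.out.println(description.getDisplayName() + " took " + elapsedMillis + " ms");
    }
}
```

Surefire can register such a listener through its `listener` property in the plugin configuration, so it runs for every test without touching the tests themselves.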

So, any ideas? I'd appreciate suggestions.

nobeh
  • I would use a profiler for the packages which match your libraries. It is usually best to profile all your code and focus on the most significant ones; otherwise you can spend time worrying about packages which are relatively small. – Peter Lawrey Oct 01 '12 at 08:35
  • If you want to know (roughly) how much time something takes, run it a million times, stopwatch the time, and divide by a million (a sketch of this appears after these comments). – Mike Dunlavey Oct 01 '12 at 12:33
  • @MikeDunlavey Thanks, but consider the fact that I do not want to benchmark a specific section; I'm interested in profiling the "as-close-as-possible-to-real" execution of the code sections/methods. – nobeh Oct 03 '12 at 09:29
  • Then [*this*](http://stackoverflow.com/a/378024/23771) is the method I would use. I run it under a semi-realistic workload, and grab a bunch of stack samples at pseudo-random times. There are multiple ways to do that. Then if the whole thing took 100 seconds, say, and Foo is on 30% of the samples, it accounts for about 30 of those 100 seconds. That's what would be saved if I didn't call it. If I know I called it 1000 times, that means on average it cost 30 milliseconds per call. The math is pretty simple. If precision of measurement is a concern, just get more stack samples. – Mike Dunlavey Oct 03 '12 at 13:24
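For reference, the simple stopwatch approach suggested in the comments above could be sketched as follows. The measured call and the iteration counts are placeholders, and a naive loop like this ignores JIT and GC effects, so the numbers are only rough:

```java
public class StopwatchBenchmark {
    public static void main(String[] args) {
        int iterations = 1000000;
        // warm-up so the JIT has a chance to compile the measured code
        for (int i = 0; i < 10000; i++) {
            doWork();
        }
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            doWork();
        }
        long elapsed = System.nanoTime() - start;
        System.out.println("average: " + (elapsed / iterations) + " ns per call");
    }

    private static void doWork() {
        // placeholder for the call under measurement, e.g. logger.debug("message")
    }
}
```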
