4

I have a method that takes a long time to execute the first time, but after several invocations it takes about 30 times less time. So, to make my application respond to user interaction faster, I "warm up" this method (5 times) with some sample data on initialization of the application, roughly as in the sketch below. But this increases the app's start-up time.
I read that the JVM can optimize and compile my Java code to native code, thus speeding things up. I wanted to know: is there some way to explicitly tell the JVM that I want this method to be compiled at application startup?
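Roughly, the warm-up looks like this (prepareLiteras is my slow method; sampleData is just a placeholder for the sample collection):

    // Warm-up on application init: invoke the slow method a few times with
    // sample data so later calls triggered by the user are fast.
    for (int i = 0; i < 5; i++) {
        Litera.prepareLiteras(sampleData); // sampleData: placeholder sample collection
    }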

Rogach
  • 26,050
  • 21
  • 93
  • 172
  • 1
    Are you really sure that it performs better after being invoked several times exactly because of JVM optimizations? Depending on what your method does, it could be something else, too - for example, disk or database caching... – Goran Jovic Jan 07 '11 at 14:22
  • @Rogach: can't you first optimize your method? What kind of stuff does it do? – Gugussee Jan 07 '11 at 14:24
  • What exactly does this method do? – Cratylus Jan 07 '11 at 14:24
  • The method is way too big to post it here, but basically it takes a collection of something like Path2D and groups them together based on which path contains another path. The longest part of it is checking for self-intersection of a shape, which is done using the Shamos-Hoey algorithm. The speed of execution doesn't seem to be based on caching - I can give the method various sample sets of data (and various actual data too), and the effect is still the same - the method becomes faster after it has been executed. – Rogach Jan 07 '11 at 14:40
  • If someone can suggest a place where I can post my code without fear that the code is too big, I'll put the source of the method there. – Rogach Jan 07 '11 at 14:43
  • pastebin.com accepts up to 1MB of txt. – the.duckman Jan 07 '11 at 14:48
  • 1
    http://pastebin.com/j5RAxs0s - Here go two source files of my app. The method in question is Litera.prepareLiteras() – Rogach Jan 07 '11 at 14:58

6 Answers

6

The JVM does JIT (Just in Time) optimization at runtime. This means it can analyze how your code actually executes and apply optimizations that improve runtime performance. If you are seeing the method get faster after a few executions, it is probably because of those JIT optimizations (unless your analysis is flawed and, say, the method gets faster because the data gets simpler). If your analysis is correct, compiling to native code ahead of time might actually hurt you, because you would no longer get the runtime optimizations.

Can we see the method? You might be able to make it faster without having to worry about how the JVM works. You should isolate exactly where the most costly operations are. You should also verify that this is not some sort of garbage collection issue, i.e. maybe the method is fine, but a GC is running that chews up time, and once it is done your method runs at acceptable speed.
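As a rough check (just a sketch; the method and data names are placeholders from the comments), time each invocation yourself and run the JVM with -verbose:gc so collections show up in the same output:

    // Prints per-call timings; run with -verbose:gc to see whether a
    // collection coincides with the slow early calls.
    for (int i = 0; i < 10; i++) {
        long start = System.nanoTime();
        Litera.prepareLiteras(sampleData);   // placeholder for the method under test
        long elapsedMs = (System.nanoTime() - start) / 1000000L;
        System.out.println("call " + (i + 1) + ": " + elapsedMs + " ms");
    }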

hvgotcodes
  • 118,147
  • 33
  • 203
  • 236
  • regarding the second paragraph, is there a good tool to profile the execution of a Java program? Something like Shark, for example (afaik, Shark works only for Macintosh, and for "C-like" code) – posdef Jan 07 '11 at 14:41
  • @posdef, i just searched SO for 'best free java profiler' and get http://stackoverflow.com/questions/163722/which-java-profiler-is-better-jprofiler-or-yourkit – hvgotcodes Jan 07 '11 at 14:46
4

The JIT optimizations work so well precisely because they optimize what your code actually does, not what it could do in different circumstances.

It's even possible that the JIT-compiled code differs between runs because of different input data, or that the same code is reoptimized more than once when circumstances change.

In other words: without real data, the JVM won't do a good job of optimizing your code (i.e. it can only do 'static' optimizations).

But in the end, if you're getting such a big improvement (30x is a lot!), it's quite likely that it's either

  • not the code but something else (like file or database caches)
  • very non-optimal code at the source level (like some heavy calculation that could be hoisted out of a tight loop)

EDIT:

After looking at your code: in the big loop in Litera.prepareLiteras(), you're repeatedly calling path.contains(p) with different points but the same path. SimplePath.contains() creates a bounding shape each time it's called, so you end up creating the same shape again and again. That's a prime example of something that should be pulled out of the inner loop (see the sketch below).

I don't think the JIT can optimize that whole pattern away, but in some extreme cases it might specialize getShape() for a single path and then recompile it again for the next path. Not a good use of the JVM's smarts, eh?
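To illustrate the hoisting only (I'm guessing at names and types, since I don't have your real signatures in front of me - treat this as a sketch, not your actual code):

    // Build the bounding shape once per path, outside the inner loop,
    // instead of letting contains() rebuild it on every call.
    // (Shape is java.awt.Shape, Point2D is java.awt.geom.Point2D.)
    Shape bounds = path.getShape();       // assumed: whatever SimplePath.contains() recreates each time
    for (Point2D p : candidatePoints) {   // assumed name for the points being tested
        if (bounds.contains(p)) {
            // ... group the paths as before ...
        }
    }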

Javier
  • 60,510
  • 8
  • 78
  • 126
3

java.lang.Compiler.compileClass
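A minimal sketch of using it (the class is the one from the question; note that java.lang.Compiler is only a hint to the VM, and on HotSpot it frequently does nothing):

    // Ask the JVM to compile this class's methods at startup.
    // This is a hint only; it may return false and silently do nothing.
    boolean compiled = Compiler.compileClass(Litera.class);
    System.out.println("compileClass returned " + compiled);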

Brett Kail
  • 33,593
  • 2
  • 85
  • 90
2

If you use the Sun JVM, there are different thresholds for JIT compilation, depending on whether you use the client or the server JVM. For the client VM it is 1500 calls to a method; for the server VM it's 10000. You can change this to a very low value using the JVM parameter -XX:CompileThreshold=100.

Such a low threshold won't benefit your overall performance, though. I only suggest using it to test whether the performance improvement from warm-up is caused by the JIT.
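For example (the jar name is just a placeholder):

    java -XX:CompileThreshold=100 -jar myapp.jar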

I've never yet seen a warm-up improvement of a factor of 30 that was due to JIT optimizations. It has always been due to some kind of caching.

the.duckman
  • 6,376
  • 3
  • 23
  • 21
  • Wow. It nearly kills the app. Yes, it seems that the improvement is not the result of JIT. That parameter actually makes things much slower. – Rogach Jan 07 '11 at 15:03
  • 2
    Well, then try a profiler. Current JDKs include JVisualVM, which comes with a basic profiler. – the.duckman Jan 07 '11 at 15:09
2

You can try running this on a 64-bit JVM, if you have a 64-bit operating system.

There are two versions of the JVM in Oracle's implementation: the client VM and the server VM. On 32-bit Windows, the client VM is the default. On 64-bit Windows, the server VM is the default.

The difference between the client and server VM is in how they are tuned: the server VM does more aggressive optimizations (and does them earlier) than the client VM, and its default settings are geared towards long-running processes. The client VM's defaults are optimized for desktop use: it does less optimization up front, but starts up more quickly.
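You can also ask for the server VM explicitly, assuming your Java installation includes it (the jar name is just a placeholder):

    java -server -jar myapp.jar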

I've experienced major speed differences in calculation-intensive programs; these sometimes run twice as fast on a 64-bit JVM compared to a 32-bit JVM.

Jesper
  • 202,709
  • 46
  • 318
  • 350
1

Mostly I'd ditto hvgotcodes, but it's also possible that the issue is not JVM optimization: after the first few runs, data coming from disk is now in the cache, or the first few times the JVM is still loading and initializing classes, but after that they're all in memory.

Jay
  • 26,876
  • 10
  • 61
  • 112