
The canonical JVM implementation from Sun applies some pretty sophisticated optimization to bytecode to obtain near-native execution speeds after the code has been run a few times.

The question is, why isn't this compiled code cached to disk for use during subsequent uses of the same function/class?

As it stands, every time a program is executed, the JIT compiler kicks in afresh rather than using a pre-compiled version of the code. Wouldn't adding this feature significantly improve the program's initial run time, during which the bytecode is essentially being interpreted?
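
For concreteness, here is a minimal, hypothetical sketch of the warm-up being described (the class and method names are purely illustrative). Timing the same method repeatedly within a single JVM run typically shows the earliest rounds running slowest, while the bytecode is still interpreted or only lightly compiled, and every fresh JVM start repeats that warm-up from scratch.

```java
// Minimal sketch, assuming nothing beyond the standard library: time the same
// method over several rounds and watch the early rounds run slower than the
// later, JIT-compiled ones. Restarting the JVM starts the warm-up over again,
// which is the cost the question asks about caching away.
public class WarmupDemo {

    // A deliberately simple hot method; the JIT compiles it after it has been
    // invoked (or looped) "enough" times -- the exact threshold is VM-specific.
    static long sumOfSquares(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            total += (long) i * i;
        }
        return total;
    }

    public static void main(String[] args) {
        long sink = 0; // keep results live so the work is not optimised away
        for (int round = 1; round <= 10; round++) {
            long start = System.nanoTime();
            sink += sumOfSquares(5_000_000);
            long micros = (System.nanoTime() - start) / 1_000;
            System.out.printf("round %2d: %d us%n", round, micros);
        }
        System.out.println("(ignore) " + sink);
    }
}
```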

Chinmay Kanchi

5 Answers


Without resorting to cut'n'paste of the link that @MYYN posted, I suspect this is because the optimisations that the JVM performs are not static, but rather dynamic, based on the data patterns as well as code patterns. It's likely that these data patterns will change during the application's lifetime, rendering the cached optimisations less than optimal.

So you'd need a mechanism to establish whether the saved optimisations were still optimal, at which point you might as well just re-optimise on the fly.
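
To make that concrete, here is a hypothetical sketch (the Shape/Circle/Rectangle names are mine, purely for illustration) of the kind of data-dependent speculation in play. While the hot call site only ever sees one implementation, the JIT may speculate on the receiver type, devirtualise and inline the call; the moment a second implementation turns up, that speculation is typically discarded and the method deoptimised and recompiled. Machine code cached from an earlier run could be stale for the data the next run actually sees, for exactly the same reason.

```java
// Hypothetical sketch: a call site that is monomorphic for a long stretch of
// the run (only Circle flows through it), which the JIT may devirtualise and
// inline, followed by a phase with a new receiver type (Rectangle) that
// typically forces deoptimisation and recompilation of totalArea().
interface Shape {
    double area();
}

final class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

final class Rectangle implements Shape {
    private final double w, h;
    Rectangle(double w, double h) { this.w = w; this.h = h; }
    public double area() { return w * h; }
}

public class SpeculationDemo {

    static double totalArea(Shape[] shapes) {
        double total = 0;
        for (Shape s : shapes) {
            total += s.area(); // the call site the JIT profiles and may inline
        }
        return total;
    }

    public static void main(String[] args) {
        // Phase 1: only Circle instances -- the type profile is monomorphic.
        Shape[] circles = new Shape[100_000];
        for (int i = 0; i < circles.length; i++) {
            circles[i] = new Circle(i % 10 + 1);
        }
        double warm = 0;
        for (int i = 0; i < 100; i++) {
            warm += totalArea(circles);
        }

        // Phase 2: a new data pattern invalidates the earlier speculation.
        Shape[] mixed = new Shape[100_000];
        for (int i = 0; i < mixed.length; i++) {
            mixed[i] = (i % 2 == 0) ? new Circle(2) : new Rectangle(2, 3);
        }
        System.out.println(warm + " " + totalArea(mixed));
    }
}
```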

skaffman
  • ...or you could just offer persistence as an *option*, like Oracle's JVM does -- empower *advanced programmers* to optimize their application's performance when and where they just **know** the patterns are **not** changing, under their responsibility. Why not?! – Alex Martelli Jan 02 '10 at 22:58
  • Because it's probably not worth it. If neither Sun, IBM nor BEA considered it worthwhile for their performance JVMs, there's going to be a good reason for it. Maybe their runtime optimisation is faster than Oracle's, which is why Oracle caches it. – skaffman Jan 02 '10 at 23:00
  • Why not take stored optimisations as a starting point, to use what has been learned on previous runs? From there the JIT could work as usual and re-optimise stuff. On shut-down, that code could be persisted again and used in the next run as a new starting point. – Puce Jul 25 '12 at 15:25
  • @Puce The only reason I can think of is that AFAIK you get no profiling stats from running optimized code. So you'd have no way to improve... – maaartinus Jul 24 '14 at 02:05
  • I would personally be fine with a "just persist the JIT profiling information between runs" option, with all the warnings that "this will only be valid with the exact same JVM, same data, etc., and otherwise ignored". Regarding why this has not been implemented, I would expect that the added complexity of persisting and validating the JIT seed data was too much to take resources from other projects. Given the choice between this and Java 8 lambdas + streams, I'd rather have the latter. – Thorbjørn Ravn Andersen Jan 01 '15 at 19:20
  • @ThorbjørnRavnAndersen And the exact same OS and CPU too, isn't it? I think that rather than the JVM, since the code has already been compiled to native code, the OS and the CPU should matter. – Koray Tugay Jan 25 '15 at 09:55
  • @KorayTugay Not necessarily. The profiling information I am thinking about is "this method is frequently called, inline it!", which is a hint to the JIT compiler. The actual machine code would be generated for each run - it is the warming-up phase you are optimizing out. – Thorbjørn Ravn Andersen Jan 25 '15 at 11:20
  • @ThorbjørnRavnAndersen But the question is about caching the JIT-compiled code. And the JIT-compiled code is architecture/OS specific. – Koray Tugay Jan 25 '15 at 11:24
  • @KorayTugay ... which would get really exciting on a shared file system. I can imagine the whole thing being a nightmare. – SusanW Aug 16 '16 at 20:00
  • Meaning-based questions are not allowed here. I really wonder why this question is still here while other similar questions get quickly deleted. – Stefan Feb 13 '21 at 17:49

Oracle's JVM is indeed documented to do so -- quoting Oracle,

the compiler can take advantage of Oracle JVM's class resolution model to optionally persist compiled Java methods across database calls, sessions, or instances. Such persistence avoids the overhead of unnecessary recompilations across sessions or instances, when it is known that semantically the Java code has not changed.

I don't know why all sophisticated VM implementations don't offer similar options.

Alex Martelli

An update to the existing answers - Java 8 has a JEP dedicated to solving this:

=> JEP 145: Cache Compiled Code.

At a very high level, its stated goal is:

Save and reuse compiled native code from previous runs in order to improve the startup time of large Java applications.

Hope this helps.

Eugen

Excelsior JET has had a caching JIT compiler since version 2.0, released back in 2001. Moreover, its AOT compiler can recompile the cache into a single DLL/shared object with all optimizations applied.

Dmitry Leskov
  • Yes, but the question was about the canonical JVM, i.e., Sun's JVM. I'm well aware that there are several AOT compilers for Java as well as other caching JVMs. – Chinmay Kanchi Jan 21 '10 at 22:24

I do not know the actual reasons, not being in any way involved in the JVM implementation, but I can think of some plausible ones:

  • The idea of Java is to be a write-once-run-anywhere language, and putting precompiled stuff into the class file is kind of violating that (only "kind of" because of course the actual byte code would still be there)
  • It would increase the class file sizes because you would have the same code there multiple times, especially if you happen to run the same program under multiple different JVMs (which is not really uncommon, when you consider different versions to be different JVMs, which you really have to do)
  • The class files themselves might not be writable (though it would be pretty easy to check for that)
  • The JVM optimizations are partially based on run-time information, and on other runs they might not be as applicable (though they should still provide some benefit); see the sketch after this list
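
A hypothetical sketch of that last point (the scenario and names are made up for illustration): the same method can have completely different hot branches depending on the data a particular run happens to process, so a profile, or compiled code laid out according to it, persisted from one run could be a poor fit, or actively misleading, for the next.

```java
// Hypothetical sketch: which branch of price() is hot depends entirely on the
// input of this particular run. A branch profile gathered during a run full of
// small interactive orders says little about a run full of bulk orders, so
// persisting it across runs would not necessarily help.
public class ProfileDependsOnData {

    static double price(int quantity) {
        if (quantity < 100) {
            return quantity * 9.99;   // hot in a run dominated by small orders
        } else {
            return quantity * 7.49;   // hot in a run dominated by bulk orders
        }
    }

    public static void main(String[] args) {
        boolean batchRun = args.length > 0 && args[0].equals("batch");
        java.util.Random random = new java.util.Random(42);
        double total = 0;
        for (int i = 0; i < 10_000_000; i++) {
            int quantity = batchRun
                    ? 100 + random.nextInt(10_000)  // almost always the else-branch
                    : 1 + random.nextInt(50);       // almost always the then-branch
            total += price(quantity);
        }
        System.out.println(total);
    }
}
```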

But I really am guessing, and as you can see, I don't really think any of my reasons are actual show-stoppers. I figure Sun just doesn't consider this kind of support a priority, and maybe my first reason is close to the truth, as doing this habitually might also lead people into thinking that Java class files really need a separate version for each VM instead of being cross-platform.

My preferred way would actually be to have a separate bytecode-to-native translator that you could use to do something like this explicitly beforehand, creating class files that are explicitly built for a specific VM, with possibly the original bytecode in them so that you can run with different VMs too. But that probably comes from my experience: I've been mostly doing Java ME, where it really hurts that the Java compiler isn't smarter about compilation.

JaakkoK
  • There is a spot in the classfile for such things; in fact, that was the original intent (store the JIT'ed code as an attribute in the classfile). – TofuBeer Jan 02 '10 at 19:29
  • @TofuBeer: Thanks for the confirmation. I suspected that might be the case (that's what I would have done), but wasn't sure. Edited to remove that as a possible reason. – JaakkoK Jan 02 '10 at 20:28
  • I think you hit the nail on the head with your last bullet point. The others could be worked around, but that last part is, I think, the main reason JITed code is not persisted. – Sasha Chedygov Jan 02 '10 at 20:31
  • The last paragraph about the explicit bytecode-to-native compiler is what you currently have in .NET with NGEN (http://msdn.microsoft.com/en-us/library/6t9t5wcf(VS.71).aspx). – R. Martinho Fernandes Jan 02 '10 at 20:42