82

I've been thinking about it lately, and it seems to me that most advantages attributed to JIT compilation should more or less be attributed to the intermediate format instead, and that jitting in itself is not a particularly good way to generate code.

So these are the main pro-JIT compilation arguments I usually hear:

  1. Just-in-time compilation allows for greater portability. Isn't that attributable to the intermediate format? I mean, nothing keeps you from compiling your virtual bytecode into native code once you've got it on your machine. Portability is an issue in the 'distribution' phase, not during the 'running' phase.
  2. Okay, then what about generating code at runtime? Well, the same applies. Nothing keeps you from integrating a just-in-time compiler for a real just-in-time need into your native program.
  3. But the runtime compiles it to native code just once anyways, and stores the resulting executable in some sort of cache somewhere on your hard drive. Yeah, sure. But it optimized your program under time constraints, and it's not going to make it any better from there on. See the next paragraph.

It's not like ahead-of-time compilation has no advantages either. Just-in-time compilation has time constraints: you can't keep the end user waiting forever while your program launches, so it has to make a tradeoff somewhere. Most of the time it just optimizes less. A friend of mine had profiling evidence that inlining functions and unrolling loops "manually" (obfuscating the source code in the process) had a positive impact on the performance of his C# number-crunching program; doing the same on my side, with my C program performing the same task, yielded no improvement, and I believe this is due to the extensive transformations my compiler was allowed to make.
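For the curious, the kind of "manual" transformation in question looks roughly like this. This is an illustrative Java sketch (method names are made up; the same idea applies to C# or C): the second version does by hand what an AOT compiler with a generous time budget will typically do on its own.

```java
public class UnrollDemo {
    // Straightforward version: relies on the compiler to unroll the loop.
    static long sumPlain(int[] a) {
        long s = 0;
        for (int i = 0; i < a.length; i++) s += a[i];
        return s;
    }

    // "Manually" unrolled version: four elements per iteration, four
    // independent accumulators. Harder to read, and pointless if the
    // compiler already performs this transformation itself.
    static long sumUnrolled(int[] a) {
        long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
        int i = 0;
        for (; i + 4 <= a.length; i += 4) {
            s0 += a[i];
            s1 += a[i + 1];
            s2 += a[i + 2];
            s3 += a[i + 3];
        }
        for (; i < a.length; i++) s0 += a[i]; // handle the remainder
        return s0 + s1 + s2 + s3;
    }

    public static void main(String[] args) {
        int[] a = new int[1000];
        for (int i = 0; i < a.length; i++) a[i] = i;
        System.out.println(sumPlain(a) == sumUnrolled(a)); // same result either way
    }
}
```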

And yet we're surrounded by jitted programs. C# and Java are everywhere, Python scripts can compile to some sort of bytecode, and I'm sure a whole bunch of other programming languages do the same. There must be a good reason that I'm missing. So what makes just-in-time compilation so superior to ahead-of-time compilation?


EDIT To clear some confusion, maybe it would be important to state that I'm all for an intermediate representation of executables. This has a lot of advantages (and really, most arguments for just-in-time compilation are actually arguments for an intermediate representation). My question is about how they should be compiled to native code.

Most runtimes (or compilers for that matter) will prefer to either compile them just-in-time or ahead-of-time. As ahead-of-time compilation looks like a better alternative to me because the compiler has more time to perform optimizations, I'm wondering why Microsoft, Sun and all the others are going the other way around. I'm kind of dubious about profiling-related optimizations, as my experience with just-in-time compiled programs displayed poor basic optimizations.

I used an example with C code only because I needed an example of ahead-of-time compilation versus just-in-time compilation. The fact that C code wasn't emitted to an intermediate representation is irrelevant to the situation, as I just needed to show that ahead-of-time compilation can yield better immediate results.

zneak
  • I'm not sure what you're arguing. You're saying that the benefits ascribed to JITted code are really the result of the intermediate format, and then wondering why that intermediate format is so prevalent? – Anon. Jan 21 '10 at 02:10
  • No, I'm not arguing against the intermediate format being prevalent. I'm questioning why it's necessary to compile that intermediate format just-in-time instead of, say, ahead-of-time during the installation phase. – zneak Jan 21 '10 at 02:17
  • Such an interesting question. I read all the posts and none of the arguments presented convinced me. I still don't understand why languages like Java do not compile the bytecodes to native code beforehand (it will do that anyway using JIT, so why not compile all the code before it gets executed?). I never saw natively compiled code running slower than code compiled using JIT (even after the latter has been running for weeks), so the arguments of "better performance" in favor of JIT don't make any sense to me. – Bitcoin Cash - ADA enthusiast Jan 28 '13 at 23:15
  • @TiagoT, with some more experience now (this question is 3 years old, can you believe it?), I'm going to put more emphasis on the fact that you don't need to recompile bytecode programs when a class in an external library changes. This is a __huge__ advantage for object-oriented systems like Java and the CLR. – zneak Jan 29 '13 at 00:08
  • @zneak I was rereading your answer now. Indeed, that is an advantage, but not a performance advantage. When I said I don't see the advantage of JIT over AOT compilation, I was talking mainly about performance. Funny thing is that Google, with KitKat 4.4, is now testing an AOT compiler for Android too... so I guess it will eventually replace the current JIT compiler, something I imagined would happen sooner or later. I really can't get my mind around why so many systems moved to JIT (instead of sticking to AOT) in the first place... – Bitcoin Cash - ADA enthusiast Apr 29 '14 at 04:05
  • @Tiago: not needing to recompile your program when a library changes *is* a performance advantage, if you consider that this feature is so crucial that native applications use it as well, by deploying applications and libraries separately, to be linked at runtime, or by linking to operating-system-provided libraries. The difference is that JIT-compiled code gets linked to the actual library version first, followed by inlining and optimizations, before the final code gets executed. This opportunity does not exist for native applications, where every library function is a black box… – Holger Oct 06 '17 at 07:44
  • @Holger What kind of scenario with external library updates / software distribution do you have in mind? If any component of our application changes (our code, library dependency code), we need to distribute an app update anyway, right? I believe most app updates are driven by changes to our own app code (new features, bug fixes), not by library changes. So your argument is more that every update will need recompilation? Again, we are not talking about the development phase; as a developer I am fine using JIT and updating libraries as many times as I want during development without recompilation. – Aleksandr Ivannikov Nov 21 '17 at 14:24
  • @AleksandrIvannikov: do you recompile your installed applications when installing a new codec? Or when updating your graphics driver? These are examples of native libraries whose code never gets inlined into the application code. In the context of Java, the simplest example is running your application on a newer JRE, benefiting from bug fixes and performance improvements. Or dropping any new SPI implementation into the env, say input methods, ImageIO, charsets, file systems, JNDI, auth, JDBC drivers, etc, all extensible at runtime by the user without needing the developer to recompile… – Holger Nov 21 '17 at 15:17
  • @Holger Your codec example actually contradicts your library argument and gives +1 to AOT. It proves that even if both parts of an app are in binary form (for example, an .exe and a .dll from the Windows world), it is possible to update one of them (the .dll) without changing the other. The same applies to the JRE; strictly speaking, it is just another big library: when a newer JRE comes, it is recompiled and dynamically linked to from other, already-compiled apps. – Aleksandr Ivannikov Nov 21 '17 at 15:59
  • @AleksandrIvannikov: that’s what I already said in [this comment](https://stackoverflow.com/questions/2106380/what-are-the-advantages-of-just-in-time-compilation-versus-ahead-of-time-compila?noredirect=1#comment80153324_2106380): “*this feature is so crucial, that native applications use it as well*” (linking exe and dll), even if the code can not get inlined and optimized together like in a JIT environment. This is *not* the same with a Java application as “we” do *not* recompile when the JRE (or a library) changes, the JRE does compile+optimize *after* linking, automatically. – Holger Nov 21 '17 at 17:15
  • @Holger: so, your argument for the JIT is kind of the traditional one, about runtime optimization. You are saying that the performance of "JIT overhead + inlining library code" is better than that of "AOT + calling library code". Is this correct? – Aleksandr Ivannikov Nov 22 '17 at 09:15
  • @AleksandrIvannikov: it's rather that you don't have to make a trade-off between the target system's modularity and performance. Further, it's more than "inlining library code", as the inlining is only the starting point for all other optimizations done on the combined code. However, I'm not claiming that "JIT overhead + JIT advantage" is always better than AOT. AOT requires carefully chosen trade-offs between modularization and performance; also, incorporating profiling data for optimizing hot code paths requires additional effort (more often than not, not made). But if done right, AOT likely wins. – Holger Nov 22 '17 at 09:51

9 Answers

41
  1. Greater portability: The deliverable (byte-code) stays portable

  2. At the same time, more platform-specific: Because the JIT-compilation takes place on the same system that the code runs, it can be very, very fine-tuned for that particular system. If you do ahead-of-time compilation (and still want to ship the same package to everyone), you have to compromise.

  3. Improvements in compiler technology can have an impact on existing programs. A better C compiler does not help you at all with programs already deployed. A better JIT-compiler will improve the performance of existing programs. The Java code you wrote ten years ago will run faster today.

  4. Adapting to run-time metrics. A JIT-compiler can not only look at the code and the target system, but also at how the code is used. It can instrument the running code, and make decisions about how to optimize according to, for example, what values the method parameters usually happen to have.
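A toy illustration of point 4 (this is a deliberately simplified sketch, not how HotSpot or the CLR actually work, and the threshold value is made up): a JIT can count invocations per method and only invest compilation effort once a method proves to be hot, at which point it has real observed data to optimize against.

```java
import java.util.HashMap;
import java.util.Map;

public class HotCounter {
    static final int HOT_THRESHOLD = 10_000;       // hypothetical tier-up point
    static final Map<String, Integer> counts = new HashMap<>();

    // Records one invocation; returns true exactly when the method "turns hot".
    static boolean recordCall(String method) {
        int n = counts.merge(method, 1, Integer::sum);
        return n == HOT_THRESHOLD;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 15_000; i++) {
            if (recordCall("Foo.bar")) {
                // A real JIT would recompile Foo.bar at this point,
                // specializing it for the argument values and branch
                // outcomes observed during the first 10,000 calls.
                System.out.println("Foo.bar became hot at call " + (i + 1));
            }
        }
    }
}
```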

You are right that JIT adds to start-up cost, and so there is a time-constraint for it, whereas ahead-of-time compilation can take all the time that it wants. This makes it more appropriate for server-type applications, where start-up time is not so important and a "warm-up phase" before the code gets really fast is acceptable.

I suppose it would be possible to store the result of a JIT compilation somewhere, so that it could be re-used the next time. That would give you "ahead-of-time" compilation for the second program run. Maybe the clever folks at Sun and Microsoft are of the opinion that a fresh JIT is already good enough and the extra complexity is not worth the trouble.

Thilo
  • For your first and second points, what if I just took the bytecode and compiled it ahead-of-time on the end user's machine, for his specific system, during the installation process? This is why I say these advantages are relative to the intermediate format and not to just-in-time compilation. As for your third point, yeah, I guess it's true. However, if my native code runs twice as fast today as my jitted code does, I'm not quite interested in its performance 10 years from now. – zneak Jan 21 '10 at 02:06
  • Then you'd have to bundle the ahead-of-time compiler with your program. Which breaks portability, because your compiler needs to be native-code. – Anon. Jan 21 '10 at 02:13
  • @Anon: it should be stated that the requirements for just-in-time compilation and ahead-of-time compilation are the same. You can't run .NET programs without the .NET framework, so virtual machines already broke portability there. – zneak Jan 21 '10 at 02:15
  • So what you are asking is: "Why do we not have an installation phase that turns byte-code into machine code?" I suppose the difference to JIT is that dynamic adaptation would go away. On the other hand, no time constraints. Maybe it is not worth the extra complexity (the installation phase). Don't know. – Thilo Jan 21 '10 at 02:16
  • @Anon: No, it would be using the same (or very similar) compiler that the JVM or CLR is using now. – Thilo Jan 21 '10 at 02:17
  • Sorry for necroing. Looking at the JIT-generated (optimized) machine code for .NET, I have to say your points 2, 3 and 4 are all invalid. There is barely any noticeable fine-tuning done by the .NET JIT, and the generated machine code is noticeably inferior (comparable to unoptimized C++ machine code). The code generated 10 years ago in C will still be faster even today. And finally, there is no effect of any run-time metrics. Even after running or 'warming up' a function 100,000 times, there is no change to the code by the .NET JIT. Again, sorry for necroing, but I felt this had to be commented on. – Jorma Rebane Sep 23 '14 at 13:03
  • I'm very sorry, but 10 years have passed. Does anyone care to evaluate whether the claim about JIT-compiler progress improving the performance of old programs is true? – sigod Oct 11 '21 at 22:16
  • @sigod .NET JIT has received massive improvements over the years for the codegen it does. – Mike Marynowski Oct 07 '22 at 07:51
  • While JITted code potentially can be more performant, it seems that in practice most of the cpu constrained code is actually ahead-of-time compiled. I'm thinking about ML libs for example. I think the reason is that a JIT cannot afford to massively optimize functions, because it just takes too much time. – freakish Oct 24 '22 at 20:46
19

The ngen tool page spilled the beans (or at least provided a good comparison of native images versus JIT-compiled images). Executables that are compiled ahead-of-time typically have the following benefits:

  1. Native images load faster because they don't have much startup activity, and require less memory (the memory otherwise needed by the JIT compiler);
  2. Native images can share library code, while JIT-compiled images cannot.

Just-in-time compiled executables typically have the upper hand in these cases:

  1. Native images are larger than their bytecode counterpart;
  2. Native images must be regenerated whenever the original assembly or one of its dependencies is modified.

The need to regenerate an ahead-of-time compiled image every time one of its components changes is a huge disadvantage for native images. On the other hand, the fact that JIT-compiled images can't share library code can cause a serious memory hit. The operating system can load any native library at one physical location and share its immutable parts with every process that wants to use it, leading to significant memory savings, especially with system frameworks that virtually every program uses. (I imagine that this is somewhat offset by the fact that JIT-compiled programs only compile what they actually use.)

The general consideration of Microsoft on the matter is that large applications typically benefit from being compiled ahead-of-time, while small ones generally don't.

zneak
  • If the application requires performance, then AOT is definitely the way to go. Needing to recompile during development is not an issue either, since AOT should be done during deployment, by which point the interfaces no longer change. Bytecode images generate a lot more code between function calls, so AOT having one additional jump instruction is still faster than regular JIT code. However, if your application is tiny and performance isn't critical, then it doesn't really matter at all. – Jorma Rebane Sep 23 '14 at 13:09
7

Simple logic tells us that compiling a huge, MS Office-sized program, even from byte-codes, will simply take too much time. You'd end up with a huge start-up time, and that would scare anyone off your product. Sure, you can precompile during installation, but this also has consequences.

Another reason is that not all parts of an application will be used. JIT will compile only those parts that the user cares about, leaving potentially 80% of the code untouched, saving time and memory.

And finally, JIT compilation can apply optimizations that normal compilers can't, like inlining virtual methods, or compiling hot parts of methods with trace trees. Which, in theory, can make them faster.
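The virtual-method case can be sketched in Java (the class names are illustrative). At the call site below, a JIT that has only ever observed one implementation of the interface can devirtualize and inline `area()`, deoptimizing later if another subclass gets loaded; an AOT compiler must assume any subclass could appear and keep the indirect call.

```java
interface Shape {
    double area();
}

final class Square implements Shape {
    final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

public class Devirt {
    static double totalArea(Shape[] shapes) {
        double total = 0;
        // Monomorphic call site: if only Square instances ever flow through
        // here, a JIT can replace the virtual dispatch with inlined code.
        for (Shape s : shapes) total += s.area();
        return total;
    }

    public static void main(String[] args) {
        Shape[] shapes = { new Square(2), new Square(3) };
        System.out.println(totalArea(shapes)); // 4 + 9 = 13.0
    }
}
```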

vava
  • I don't get your first argument. Is it supposed to be against just-in-time compilation or against ahead-of-time compilation? As for the memory and time savings, the time wasted compiling untouched program parts is wasted only once, so that isn't much of an argument IMO. And it's not saving much memory if the whole virtual bytecode is loaded in memory anyways. – zneak Jan 21 '10 at 02:08
  • +1. There is a point in saying "if you do not run a subroutine over and over again, why do you need it to be fast (and spend time to make it fast)?". That is assuming, of course, that the JIT will make it really fast if you run it really frequently. This may not happen at the first round of compilation, but eventually... – Thilo Jan 21 '10 at 02:20
  • **Sure, you can precompile during installation but this also has consequences.** What are the consequences? Slower installation? All installation is slow; making it a bit slower to get a faster application doesn't seem like much of a consequence to me. – kirie Jun 22 '14 at 15:59
5
  1. Better reflection support. This could be done in principle in an ahead-of-time compiled program, but it almost never seems to happen in practice.

  2. Optimizations that can often only be figured out by observing the program dynamically. For example, inlining virtual functions, escape analysis to turn heap allocations into stack allocations, and lock coarsening.

dsimcha
  • It is a fact that your programs are compiled into native code before running. Therefore, reflection is obviously possible even for native programs; that's more a matter of the runtime than of how you compile. As for escape analysis, I think it would be a terrible idea to base that on runtime observations rather than static code analysis. – zneak Jan 21 '10 at 02:11
4

Maybe it has to do with the modern approach to programming. You know, many years ago you would write your program on a sheet of paper, some other people would transform it into a stack of punched cards and feed into THE computer, and tomorrow morning you would get a crash dump on a roll of paper weighing half a pound. All that forced you to think a lot before writing the first line of code.

Those days are long gone. When using a scripting language such as PHP or JavaScript, you can test any change immediately. That's not the case with Java, though appservers give you hot deployment. So it is just very handy that Java programs can be compiled fast, as bytecode compilers are pretty straightforward.

But, there is no such thing as JIT-only languages. Ahead-of-time compilers have been available for Java for quite some time, and more recently Mono introduced it to CLR. In fact, MonoTouch is possible at all because of AOT compilation, as non-native apps are prohibited in Apple's app store.

Dmitry Leskov
  • Yes, of course most jitted languages also have ahead-of-time compilers. In fact, Mono didn't do anything new, as Microsoft has had a utility called ngen (http://msdn.microsoft.com/en-us/library/6t9t5wcf(VS.80).aspx) that generates native code from a CLR executable for quite some time. The fact, however, is that most languages use their just-in-time variants whenever they can, and it is this behavior I'm trying to understand. – zneak Jan 21 '10 at 07:16
3

I have been trying to understand this as well, because I saw that Google is moving towards replacing their Dalvik Virtual Machine (essentially another Java virtual machine, like HotSpot) with the Android Run Time (ART), which is an AOT compiler, while Java usually uses HotSpot, which is a JIT compiler. Apparently, ART is ~2x faster than Dalvik... so I thought to myself "why doesn't Java use AOT as well?". Anyways, from what I can gather, the main difference is that JIT uses adaptive optimization during run time, which (for example) allows ONLY those parts of the bytecode that are being executed frequently to be compiled into native code, whereas AOT compiles the entire source code into native code, and a smaller amount of code runs faster than a larger amount.
I have to imagine that most Android apps are composed of a small amount of code, so on average it makes more sense to compile the entire source code to native code AOT and avoid the overhead associated with interpretation / optimization.

  • The major advantage of the Java bytecode in this situation is that it's portable. The application can be distributed as Java code and then compiled to whatever underlying architecture is used by the phone. Android supports MIPS, ARM and x86, so it makes sense to distribute Java applications and *then* compile them natively. In this scenario, the Java bytecode is used as a glorified intermediate representation. – zneak Mar 07 '14 at 17:42
  • I understand the purpose of Java / bytecode being portable. I was trying to help clarify the advantage of JIT vs. AOT being used... I believe it depends on the size of the application source code from the end-user perspective (i.e. smaller --> AOT, larger --> JIT). They're both portable to any end-user. Otherwise, why wouldn't all client compilers be AOT? – Scott Ferrell Mar 07 '14 at 22:09
  • You may want to look at GraalVM which can do this. – Thorbjørn Ravn Andersen Aug 28 '22 at 00:12
3

It seems that this idea has been implemented in the Dart language:

https://hackernoon.com/why-flutter-uses-dart-dd635a054ebf

JIT compilation is used during development, using a compiler that is especially fast. Then, when an app is ready for release, it is compiled AOT. Consequently, with the help of advanced tooling and compilers, Dart can deliver the best of both worlds: extremely fast development cycles, and fast execution and startup times.

2

One advantage of JIT which I don't see listed here is the ability to inline/optimize across separate assemblies/dlls/jars (for simplicity I'm just going to use "assemblies" from here on out).

If your application references assemblies which might change after install (e.g. pre-installed libraries, framework libraries, plugins), then a "compile-on-install" model must refrain from inlining methods across assembly boundaries. Otherwise, when the referenced assembly is updated, we would have to find all such inlined bits of code in referencing assemblies on the system and replace them with the updated code.

In a JIT model, we can freely inline across assemblies because we only care about generating valid machine code for a single run during which the underlying code isn't changing.
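A minimal Java sketch of the point above (the class is illustrative; imagine `LibColor` ships in a separate library jar):

```java
// Stand-in for a class that lives in a separately shipped library jar.
final class LibColor {
    private final int rgb;
    LibColor(int rgb) { this.rgb = rgb; }
    int red() { return (rgb >> 16) & 0xFF; } // trivial accessor, ideal inline target
}

public class CrossJar {
    // A JIT is free to inline red() straight into this method, because the
    // generated machine code only has to be valid for the current run; if
    // the library jar is updated, the next run simply recompiles. A
    // compile-on-install model that inlined it would have to track down and
    // patch every installed copy whenever the library changes.
    static int brightness(LibColor c) {
        return c.red();
    }

    public static void main(String[] args) {
        System.out.println(brightness(new LibColor(0xAB1234))); // prints 171
    }
}
```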

ChaseMedallion
  • This may even depend on runtime behavior rather than installation. Connect to a different database today, use a different driver jar, work on files rather than http urls today, run through entirely different code paths. Etc. – Holger Oct 06 '17 at 08:04
-1

The difference between platform-browser-dynamic and platform-browser is the way your Angular app is compiled. Using the dynamic platform means Angular sends the just-in-time compiler to the front-end along with your application, so your application is compiled on the client side. Using platform-browser, on the other hand, sends an ahead-of-time pre-compiled version of your application to the browser, which usually means a significantly smaller package. The Angular 2 documentation for bootstrapping at https://angular.io/docs/ts/latest/guide/ngmodule.html#!#bootstrap explains it in more detail.