
I know that Java's HotSpot JIT will sometimes skip JIT compiling a method if it expects the overhead of compilation to be greater than the overhead of running the method in interpreted mode. Does the .NET CLR work based upon a similar heuristic?

Peter Mortensen
jsight

4 Answers


Note: this answer applies in a "per-run" context. The code is normally JITted each time you run the program. Using ngen or .NET Native changes that story, too...

Unlike HotSpot, the CLR JIT always compiles exactly once per run. It never interprets, and it never recompiles with heavier optimisation than before based on actual usage.

This may change, of course, but it's been that way since v1 and I don't expect it to change any time soon.

The advantage is that it makes the JIT a lot simpler - there's no need to consider "old" code which is already running, undo optimisations based on premises which are no longer valid etc.

One point in .NET's favour is that most CLR languages make methods non-virtual by default, which means a lot more inlining can be done. HotSpot can inline a method until it's first overridden at which point it undoes the optimisation (or does some clever stuff in some cases to conditionally still use the inlined code, based on actual type). With fewer virtual methods to worry about, .NET can largely ignore the pain of not being able to inline anything virtual.
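To make the inlining point concrete (my own sketch, not part of the answer; the `Shape`/`Circle` classes are made up for illustration): in Java, every instance method is virtual unless declared `final`, so HotSpot must speculate before inlining and back out once an override is loaded.

```java
// Sketch: virtual-by-default in Java vs. the devirtualisation HotSpot must do.
class Shape {
    double area() { return 0; }              // virtual by default in Java
    final String kind() { return "shape"; }  // final: trivially inlinable
}

class Circle extends Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    // Loading this override forces HotSpot to undo any earlier
    // optimistic inlining of Shape.area() at monomorphic call sites.
    @Override double area() { return Math.PI * r * r; }
}

public class InliningDemo {
    public static void main(String[] args) {
        Shape s = new Circle(1.0);
        // s.kind() can always be inlined (final); s.area() needs either a
        // virtual dispatch or a type-guarded inline of Circle.area().
        System.out.println(s.kind() + " area=" + s.area());
    }
}
```

In a typical CLR language the equivalent of `area()` would be non-virtual unless explicitly declared `virtual`, which is why the .NET JIT can inline it without any of this speculation.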

EDIT: The above describes the desktop framework. The Compact Framework throws out native code when it wants to, JITting again as necessary. However, this still isn't like HotSpot's adaptive optimisation.

The micro framework doesn't JIT at all apparently, interpreting the code instead. This makes sense for very constrained devices. (I can't say I know much about the micro framework.)

Jon Skeet
  • I see... so the only time that the .Net CLR runs in interpreted mode is if the JIT is turned off completely (for debugging). – jsight Aug 10 '09 at 16:23
  • 4
    I don't believe it actually interprets the code even then - the debugger is able to step through the compiled code appropriately, that's all. Mono *does* have an interpreter though. – Jon Skeet Aug 10 '09 at 16:30
  • 2
    Curiously, .net MF (MicroFramework for embedded devices) does interpret IL instead of compiling it. – Dejan Stanič Aug 10 '09 at 16:35
  • 5
    The desktop CLR doesn't pitch (unload) jitted code, so it compiles a method only once. The CLR in the .NET Compact Framework can pitch code and re-jit it when it's needed again as an adaptation to the more resource-constrained environments that the compact CLR runs in. – Curt Nichols Aug 10 '09 at 16:38
  • Oops - yes, will edit my too-general answer when I get the chance. Thanks for the corrections. – Jon Skeet Aug 10 '09 at 17:15
  • Oh, I see... I had misunderstood this: http://stackoverflow.com/questions/279582/switching-off-the-net-jit-compiler-optimisations (and other similar things) -- Apparently, it only turns off some optimizations in the JIT. Thanks for the explanation! – jsight Aug 11 '09 at 14:21
  • Doesn't the CLR JIT compile on every run? It isn't exactly once, since when you close and reopen your program it will be JITted again. I would say "exactly once per run" would be clearer – EProgrammerNotFound Jun 13 '15 at 04:01
  • @EProgrammerNotFound: See if my edit (in the first couple of paragraphs) is enough. Thanks for raising this. – Jon Skeet Jun 13 '15 at 06:07
  • Hi Jon, does your answer consider application domains, code access security and is it still true for newer .NET frameworks like 4.0, 4.5 and 4.6 which were not available at the time of writing this answer? I tried to ask a new question but it got closed as duplicate http://stackoverflow.com/questions/42262201/does-the-net-clr-jit-compile-every-method-only-once-per-run – Thomas Weller Feb 16 '17 at 07:06
  • @ThomasWeller: I honestly don't know the details there - although I'd *expect* new AppDomains to have new assemblies loaded (other than neutral assemblies) and be re-jitted. I believe there are at least moves around caching native code, but I don't know much in the way of details, I'm afraid. – Jon Skeet Feb 16 '17 at 07:49

The .NET runtime always JIT-compiles code before execution, so it is never interpreted.

You can find some more interesting reading in CLR Design Choices with Anders Hejlsberg. Especially the part:

I read that Microsoft decided that IL will always be compiled, never interpreted. How does encoding type information in instructions help interpreters run more efficiently?

Anders Hejlsberg: If an interpreter can just blindly do what the instructions say without needing to track what's at the top of the stack, it can go faster. When it sees an iadd, for example, the interpreter doesn't first have to figure out which kind of add it is, it knows it's an integer add. Assuming someone has already verified that the stack looks correct, it's safe to cut some time there, and you care about that for an interpreter. In our case, though, we never intended to target an interpreted scenario with the CLR. We intended to always JIT [Just-in-time compile], and for the purposes of the JIT, we needed to track the type information anyway. Since we already have the type information, it doesn't actually buy us anything to put it in the instructions.

Bill Venners: Many modern JVMs [Java virtual machines] do adaptive optimization, where they start by interpreting bytecodes. They profile the app as it runs to find the 10% to 20% of the code that is executed 80% to 90% of the time, then they compile that to native. They don't necessarily just-in-time compile those bytecodes, though. A method's bytecodes can still be executed by the interpreter as they are being compiled to native and optimized in the background. When native code is ready, it can replace the bytecodes. By not targeting an interpreted scenario, have you completely ruled out that approach to execution in a CLR?

Anders Hejlsberg: No, we haven't completely ruled that out. We can still interpret. We're just not optimized for interpreting. We're not optimized for writing that highest performance interpreter that will only ever interpret. I don't think anyone does that any more. For a set top box 10 years ago, that might have been interesting. But it's no longer interesting. JIT technologies have gotten to the point where you can have multiple possible JIT strategies. You can even imagine using a fast JIT that just rips quickly, and then when we discover that we're executing a particular method all the time, using another JIT that spends a little more time and does a better job of optimizing. There's so much more you can do JIT-wise.
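Hejlsberg's point about `iadd` can be sketched in a few lines (my own toy illustration; the mini instruction set is hypothetical, not Java or CIL bytecode): a typed instruction lets the interpreter execute blindly, with no runtime check of what is on the stack.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy typed-bytecode interpreter: because IADD is an *integer* add by
// definition, the dispatch loop never has to inspect operand types.
public class TypedStackDemo {
    enum Op { ICONST, IADD }   // hypothetical mini instruction set

    static int run(Object[][] program) {
        Deque<Integer> stack = new ArrayDeque<>();
        for (Object[] insn : program) {
            switch ((Op) insn[0]) {
                case ICONST -> stack.push((Integer) insn[1]);
                // IADD "knows" both operands are ints, so it can just add.
                // An untyped "add" would first have to discover the operand
                // kinds - exactly the cost Hejlsberg says interpreters avoid.
                case IADD -> stack.push(stack.pop() + stack.pop());
            }
        }
        return stack.pop();
    }

    public static void main(String[] args) {
        Object[][] program = {
            { Op.ICONST, 2 }, { Op.ICONST, 3 }, { Op.IADD, null }
        };
        System.out.println(run(program));  // 5
    }
}
```

Since the CLR always JITs, it gets the operand types from verification metadata instead, so encoding them in each instruction buys it nothing - which is the trade-off the interview describes.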

Dejan Stanič

It would be nice to see some trace-based JITs in the future for devices with low memory. Such a JIT would mainly interpret, find hot spots, compile those to native code, and cache them. I think this is what Google does with its Android JIT, and Microsoft Research has an ongoing research project on trace-based JIT.

I found an article, SPUR: A Trace-Based JIT Compiler for CIL. Maybe some of this will make it into the CLR one day?
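The "mostly interpret, promote hot code" idea boils down to invocation counting. A rough sketch (entirely my own; the class, threshold, and method names are made up, and real VMs count loop back-edges too, not just calls):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy model of counter-based hot-spot detection: interpret by default,
// and once a method crosses a threshold, pretend it has been compiled.
public class HotCounterDemo {
    static final int HOT_THRESHOLD = 3;              // assumed tuning knob
    final Map<String, Integer> callCounts = new HashMap<>();
    final Set<String> compiled = new HashSet<>();

    /** Returns which tier executed this invocation. */
    String invoke(String method) {
        if (compiled.contains(method)) return "compiled";
        int n = callCounts.merge(method, 1, Integer::sum);
        if (n >= HOT_THRESHOLD) compiled.add(method); // promote hot method
        return "interpreted";
    }

    public static void main(String[] args) {
        HotCounterDemo vm = new HotCounterDemo();
        for (int i = 1; i <= 5; i++) {
            System.out.println("call " + i + ": " + vm.invoke("loopBody"));
        }
        // calls 1-3 run interpreted; calls 4-5 hit the compiled cache
    }
}
```

A trace-based JIT like SPUR refines this by recording and compiling hot *paths* through the code rather than whole methods, but the detection mechanism is the same kind of counting.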

Peter Mortensen
AbdElRaheim

I don't believe so, and I don't think that it ever should.

How could the JIT know how many times a particular method would be called? Wouldn't the frequency of interpretation factor into the decision?

I would also question how well a JIT compiler would be able to analyze a function to determine whether or not interpretation would be best without interpreting the function itself. And given that fact (that at least one pass of the method has taken place) wouldn't it be better to simply compile each method to reduce the overhead of trying to determine which methods get compiled in the first place?
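The cost trade-off being questioned here is easy to put in numbers (all of the figures below are made up purely for illustration): compilation only pays off past a break-even call count that the JIT cannot know up front.

```java
// Back-of-envelope model: compiling wins once the per-call savings have
// repaid the one-time compilation cost, i.e. after
//   N = jitCost / (interpCallCost - nativeCallCost)
// calls. All numbers are invented for the sake of the illustration.
public class BreakEvenDemo {
    public static void main(String[] args) {
        double jitCostUs = 500.0;     // assumed one-time compile cost (us)
        double interpCallUs = 6.0;    // assumed per-call interpreted cost
        double nativeCallUs = 1.0;    // assumed per-call compiled cost

        double breakEven = jitCostUs / (interpCallUs - nativeCallUs);
        System.out.printf("Compiling wins after ~%.0f calls%n", breakEven);
        // The JIT can't know the eventual call count without profiling -
        // which is why HotSpot interprets first, and why compile-everything
        // is the simpler bet the CLR makes.
    }
}
```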

Andrew Hare
  • @Andrew - Runtime performance metrics can tell you whether something should be jitted, as well as how aggressively it should be jitted (b/c some JIT optimizations are more time consuming than others). – jsight Aug 10 '09 at 16:24
  • Just an idea (I doubt that HotSpot works like this) - you could analyse some things (like a method's length) statically while generating bytecode, make the decision then, and mark the method with a "don't compile" bit, so the JIT would know it shouldn't compile it and should fall back to the interpreter instead. – Marcin Deptuła Aug 10 '09 at 16:26
  • 3
    http://java.sun.com/products/hotspot/whitepaper.html - HotSpot can replace code in place, even while that code is executing on the stack. Therefore, a loop can begin in interpreted mode, but complete in JIT compiled mode once recompilation is complete. – jsight Aug 10 '09 at 17:29