
Is there any real advantage to bytecode JIT execution over native code, besides the possible platform independence?

Languages that use "virtual machines" with bytecode JIT execution are often credited with several advantages. But to what extent would these really matter in a discussion of the advantages/disadvantages of native code versus JIT execution?

Here's a list of attributes that I identify; the question is to what extent each of these also applies to native code - if the compiler supports it...

Security

VM runtime environments can monitor the running application for, e.g., buffer overflows.

  • So the first question is whether this is done by the "runtime environment" - that is, e.g. by the class library - or during JIT execution?

  • Memory bounds-checking also exists for native code compilers. Are there any other/general restrictions here?
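
To make the bounds-checking point concrete, here is a minimal Java sketch (class names are my own, just for illustration). The JVM specifies a bounds check on every array access, so an out-of-range write raises an exception instead of silently corrupting adjacent memory, as unchecked native code could:

```java
public class BoundsCheck {
    public static void main(String[] args) {
        int[] buf = new int[4];
        try {
            // The runtime checks the index against the array length on
            // every access; index 10 is out of range for a length-4 array.
            buf[10] = 42;
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("caught out-of-bounds access");
        }
    }
}
```

Note that this check is mandated by the language, independent of whether the bytecode is interpreted or JIT-compiled - which is exactly the question above: the safety comes from the language/runtime contract, and native compilers can (optionally) emit equivalent checks.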

Optimization

Classical optimizations should be possible in native code compilers, too. See LLVM, which in fact runs its optimizations on the generated bitcode before compiling to native code.

  • Maybe there is something like dynamic optimization in a JIT, e.g. identifying things to optimize based on the execution context. It might also be possible for a native compiler to generate some code that optimizes execution at runtime, but I don't know whether anything like this is implemented.

  • Popular VM implementations do this - the question is whether it really constitutes a real advantage over native code.
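
One standard example of such dynamic optimization (sketched here with hypothetical class names) is devirtualization: at a call site like the loop below, the JIT can observe at run time that every receiver is a `Square` and inline `area()` directly, guarded by a cheap class check. An AOT compiler working from machine code alone, without class information, would not have that option:

```java
interface Shape { double area(); }

final class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

public class Devirt {
    // A monomorphic virtual call site: if profiling shows only Square
    // instances arrive here, the JIT can replace the virtual dispatch
    // with an inlined side * side, deoptimizing if the assumption breaks.
    static double total(Shape[] shapes) {
        double sum = 0;
        for (Shape s : shapes) sum += s.area();
        return sum;
    }

    public static void main(String[] args) {
        Shape[] shapes = { new Square(2), new Square(3) };
        System.out.println(total(shapes)); // 4.0 + 9.0 = 13.0
    }
}
```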

Threading

I don't count this, since threading in a VM also depends on the native thread implementation of the OS.

If we conclude that there is no real advantage over native code and that JIT always comes with a runtime drawback... then this leads to the next question:

Does an operating system design based on JIT execution (e.g. Singularity, Cosmos, ...) make sense?

I can maybe identify one advantage: an OS with this design needs no MMU. That is, there is no process separation that relies on the MMU, but rather a separation between objects/components in software. But is it worth it?
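
A minimal sketch of what that software-based separation might look like (the `Channel` type here is my own illustration, not Singularity's actual API): components hold no raw references into each other's state and communicate only through typed channels, so the language's type safety - enforceable on verifiable bytecode - replaces the MMU as the isolation boundary.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical illustration: a typed channel is the only thing two
// "software-isolated processes" share; neither can reach the other's
// private state, because the type system hands out no such reference.
class Channel {
    private final Queue<String> messages = new ArrayDeque<>();
    void send(String m) { messages.add(m); }
    String receive() { return messages.poll(); }
}

public class SoftwareIsolation {
    public static void main(String[] args) {
        Channel ch = new Channel();
        // Component A only sends; it has no reference to B's objects.
        ch.send("hello");
        // Component B only receives; there is no shared mutable memory.
        System.out.println(ch.receive());
    }
}
```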

Best regards ;-)

Seki
  • 11,135
  • 7
  • 46
  • 70
  • As for dynamic optimization by a native compiler: If the AOT compiler "generates some code to optimize the execution during runtime", you already have a JIT. And that JIT will sure as hell use a more high-level description of the program, because that's far easier to optimize. Have you tried inlining virtual calls given nothing but machine code (e.g. no information about classes)? –  Jan 04 '12 at 13:50
  • Good objection... So dynamic optimization and also GC should be added to my list above. I think answering this in detail would require detailed measurements on current implementations... –  Jan 04 '12 at 22:05

1 Answer


Theoretically, they could take advantage of the platform/CPU they run on to compile faster code.

In practice, I personally haven't come across any case in which that actually happens.

But there are also other issues. Higher-level languages that compile to bytecode also tend to have garbage collection, which is very useful in some domains. That's not because of the JIT compiler per se, but having a JITter makes it a lot easier in practice, because such a language is often easier for a JITter to analyze - to figure out, e.g., where pointers go on the stack, etc.
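
A trivial sketch of what that buys you in practice: in a managed language, allocations have no matching free, because the runtime's knowledge of where references live lets a precise collector reclaim unreachable objects on its own.

```java
public class GcDemo {
    public static void main(String[] args) {
        // Allocate far more than we keep alive; each iteration's array
        // becomes unreachable immediately and is reclaimed by the GC.
        for (int i = 0; i < 100_000; i++) {
            byte[] scratch = new byte[64];
        }
        // No free()/delete anywhere, and no memory leak either.
        System.out.println("allocated without any explicit free");
    }
}
```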

user541686
  • 205,094
  • 128
  • 528
  • 886
  • CLR JIT compilers have actually been generating different code depending on the CPU for several years now: http://stackoverflow.com/questions/2405343/does-the-net-clr-really-optimize-for-the-current-processor –  Jan 04 '12 at 13:46
  • @delnan: I said *faster* code, not merely *different* code. I have personally never seen JITter optimize code better than a native compiler, but maybe that's because of ignorance on my part, idk. – user541686 Jan 04 '12 at 13:50
  • @delnan: Oh I read it, all right -- but all it says is "we take advantage of ", without providing any examples (e.g. something comparable in C# vs C++). To me, there isn't a realistic difference between "we take advantage of " and "we could take advantage of " -- I need to see a practical example before I (myself) can claim it's advantageous. At the moment, I haven't seen any, but again, maybe I've just been blind to them... – user541686 Jan 04 '12 at 13:55
  • If you require performance improvements visible to the plain eye, you're probably out of luck. This kind of optimizations yields only *tiny* improvements in any case (in AOT compilers too - when was the last time the GCC or LLVM team could proudly announce a general 10% performance improvement due to changes in the instruction selection?). But I'm interested: Did you ever perform a micro-benchmark to check for such things, or are you expecting a web service written in Java with half a second network latency to run 25% faster due to such optimizations? –  Jan 04 '12 at 14:01
  • @delnan: Well, I personally don't really care about micro-optimizations -- if you can't notice them, then it wouldn't make a difference if they didn't exist. Generally, I haven't noticed comparable C# code running any faster than C++ code. But I'm not stupid either (yes, I/O-bound != CPU-bound, yada-yada), so I haven't tried your latter suggestion. – user541686 Jan 04 '12 at 14:05
  • It's fine if you don't care about such optimizations yourself. The HPC guys on the other hand might be grateful for 2% less overhead in their main loop, especially if they don't have to do anything for it except updating their language implementation. And either way, "it does not exist" and "it does not matter to me" are *entirely* different things. You can answer "There are such optimizations but they don't matter to most people", but "Such optimizations do not exist" is not an option. (Unless you want to accuse a few people of lying, that is.) –  Jan 04 '12 at 14:08
  • @delnan: Well, it's *neither* -- I haven't seen any examples illustrating a practical difference, so my answer of *"I personally haven't come across any case in which that actually happens"* is pretty darn accurate, and more accurate than "it does not exist" or "I don't care". – user541686 Jan 04 '12 at 14:10
  • @delnan: It seems like you repeatedly missed the ***"I personally haven't"*** at the beginning, which I've repeated a bunch of times. I ***still*** haven't seen any actual examples, after all these discussions, whether they matter or not. – user541686 Jan 04 '12 at 14:11