1

In enterprise applications the same code runs for days without the JVM ever being restarted, and if a piece of code is hit more often than the compilation threshold it gets JIT-compiled anyway (most of it). So I want to ask: why is it not compiled in the first place? I mean, the JVM engineers already compile the code to bytecode to maintain platform independence, so they could do one more compilation step to machine code up front. Shouldn't that machine code be faster in the general case? And when a method later meets the requirements for JIT compilation, the JVM could still enhance the machine code with all the profiling information and statistics it has gathered. Surely that would add compilation time, but ordinary code would then simply execute instead of being interpreted every time. In other words: create a compiler that compiles everything once, and boost the result further when some method becomes hot. I might be wrong here, but this is a question out of curiosity.
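For example, here is a rough sketch of the behaviour I mean (-XX:+PrintCompilation is a standard HotSpot flag; the class and method names are just made up for illustration):

    // HotLoop.java -- run with: java -XX:+PrintCompilation HotLoop
    // After enough invocations, HotSpot should print lines showing that
    // square() (and the loop in main(), via on-stack replacement) were
    // compiled to machine code.
    public class HotLoop {
        static long square(long x) {
            return x * x;
        }

        public static void main(String[] args) {
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) {
                sum += square(i);   // hot call site, crosses the JIT threshold
            }
            System.out.println(sum);
        }
    }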

1arpit1
  • 63
  • 4
  • Because virtualization has its benefits, such as programming by reflection. Also, as to the efficiency questions, see http://stackoverflow.com/questions/2426091/what-are-the-differences-between-a-just-in-time-compiler-and-an-interpreter – Piotr Wilkin Aug 23 '16 at 18:37
  • I am asking why we need interpretation when we can compile. We can first compile to bytecode, and the JVM can then compile that to machine code right away, so it won't need to interpret anything. I mean, they can keep the virtualization and do the compilation at the start of the program, though that would raise the startup cost of the program. – 1arpit1 Aug 23 '16 at 18:43
  • Well, how do you think Java would handle reflection (looking into an object's fields at runtime), runtime proxying (changing the executed code at runtime) and other similar functionality? – Piotr Wilkin Aug 23 '16 at 18:44
  • My previous comment was a very silly one, I realize that now. But there could be an alternative, such as saving all interpreted code and reusing it next time, i.e. the JVM could save the interpreted code as machine code and run that from the second time onwards. I am an amateur programmer, so I can be wrong. And about reflection, yes, you are right; I will have to look up how interpretation can handle reflection in a way that compilation cannot. – 1arpit1 Aug 23 '16 at 18:47
  • It already does that (caching repeatedly executed code). See the answer that I linked. – Piotr Wilkin Aug 23 '16 at 18:51
  • This can also help: http://cs.stackexchange.com/questions/29589/what-properties-of-a-programming-language-make-compilation-impossible/29591 @PiotrWilkin – 1arpit1 Aug 23 '16 at 18:54
  • If you have an actual performance problem I would suggest investigating it in detail instead of just following your gut and blaming the mere existence of the interpreter. – the8472 Aug 24 '16 at 08:12
  • Because, for code below the threshold, “compiling + executing” is actually slower than just interpreting. Even “loading precompiled + executing” can be slower than just interpreting. – Holger Aug 25 '16 at 10:40
  • @the8472 I am not really blaming the interpreter. It was a question out of curiosity; I don't know much about interpreters and compilers, which is why I asked. – 1arpit1 Aug 28 '16 at 14:03

3 Answers

1

Compiling with optimizations is very expensive. Look at the compile times of large C projects (e.g. Firefox, the Linux kernel), especially with link-time optimization.

The JITs also compile for the target platform, i.e. they try to use all the instruction-set extensions available on the machine they are running on, which means you could not distribute the resulting compiled code anyway.

Now consider that the JITs perform speculative optimizations (based on profiling) which may turn out to be wrong and need to bail out. If compiling were the only option, that code could not continue to run until it was recompiled. With an interpreter, it can keep executing the uncommon code path that caused the bailout.
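As a hedged illustration (a minimal sketch; the class names are invented and the exact behaviour depends on the JVM and flags): HotSpot may devirtualize and inline an interface call while only one implementation has been loaded, then throw that compiled code away when a second implementation shows up, letting the interpreter carry on at the point of the bailout until the method is recompiled. With -XX:+PrintCompilation you can typically see the discarded version marked "made not entrant".

    // DeoptSketch.java -- run with: java -XX:+PrintCompilation DeoptSketch
    interface Shape { double area(); }
    class Circle implements Shape { public double area() { return Math.PI; } }
    class Square implements Shape { public double area() { return 1.0; } }

    public class DeoptSketch {
        static double sumAreas(Shape s, int n) {
            double sum = 0;
            for (int i = 0; i < n; i++) {
                sum += s.area();   // hot virtual call site
            }
            return sum;
        }

        public static void main(String[] args) {
            // Only Circle is loaded so far; the JIT may compile sumAreas()
            // assuming Circle is the sole implementation and inline area().
            double total = sumAreas(new Circle(), 1_000_000);

            // Loading and using Square breaks that assumption: the compiled
            // version is discarded, the interpreter takes over, and the method
            // is recompiled later with the wider profile.
            total += sumAreas(new Square(), 1_000_000);

            System.out.println(total);
        }
    }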

You also have to keep in mind that some optimizations are workload-specific, i.e. a (bad) test workload might exercise different code paths than the real workload, and thus benefit from being compiled differently after it has been profiled at runtime.

And not all applications are long-running daemons. Some things spin up JVMs to execute a single task which then exits when it is done.

Also consider that a lot of code only runs once, e.g. during application startup or shutdown.

All these factors contribute to why some JVMs use a combination of an interpreter and compilers by default. Others may only use AOT-compiled code or only use an interpreter due to different technical tradeoffs, but they are generally not faster.
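To make the tradeoff concrete, HotSpot exposes switches for the extremes (standard launcher flags; MyApp is a placeholder, and the actual effect on your timings will vary):

    # interpreter only, no JIT compilation at all
    java -Xint MyApp

    # compile every method on first use -- typically much slower startup
    java -Xcomp MyApp

    # stop tiered compilation at level 1 (C1 only, no profile-driven C2 pass)
    java -XX:TieredStopAtLevel=1 MyApp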

the8472
  • 40,999
  • 5
  • 70
  • 122
0

It is a mixture of multiple factors and design choices.

Java is delivered as bytecode, and no permanent artifacts are platform dependent. That is a design choice to ensure platform independence. Android made a different choice, mainly because of the more restricted platforms it runs on.

Interpreting code is faster than compiling and then running it, when the code only runs once. So, to get the best performance together with decent startup times, it is efficient to start a compilation only when you are reasonably sure the code will be needed again. The compilation is done in another thread, and the compiled binary is used as soon as it is ready. HotSpot even compiles the same code repeatedly, at increasing optimization levels, when it is used often enough. Because it can use actual dynamic runtime characteristics, the resulting binary can be faster than code built from static information alone.
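A small illustration of the "another thread" point (assuming HotSpot; MyApp is a placeholder): by default the interpreter keeps running a method while the compiler threads work on it in the background, and -Xbatch disables that, making the triggering thread wait for the compile to finish, which usually hurts startup but can help when debugging compiler issues.

    # default: background compilation, the interpreter keeps going in the meantime
    java MyApp

    # -Xbatch: the Java thread blocks until the compilation is done
    java -Xbatch MyApp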

Kedar Mhaswade
  • 4,535
  • 2
  • 25
  • 34
k5_
  • 5,450
  • 2
  • 19
  • 27
0

Java runs most code interpreted because most code runs fast enough interpreted, and there is no need to incur the overhead of native compilation. The JIT (HotSpot) engine will optimize heavily used code, where the payoff justifies the effort. Furthermore, it optimizes in context: even if, say, a variable could change in theory, it won't during a particular sequence of instructions, so that sequence can keep it in a register or treat it as a constant. When the assumption is violated, the JIT drops the method back into interpreted mode and can recompile it with the new information. Compiling ahead of time would lose all the advantage of that runtime insight. This is how Java can, in some cases, produce code that is tighter and more efficient than what ahead-of-time-compiled languages such as C++ achieve.
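A hedged sketch of that kind of contextual assumption (the class, method, and values are invented; exact behaviour depends on the JVM): when profiling shows a branch has never been taken, HotSpot may compile it as an "uncommon trap", so the hot path stays tight, and the first time the rare case is actually hit, execution drops back to the interpreter and the method is recompiled.

    // UncommonTrapSketch.java -- illustrative only
    public class UncommonTrapSketch {
        static int describe(Object o) {
            if (o instanceof String) {
                return ((String) o).length();   // the only path seen during warm-up
            }
            // If the profile never saw a non-String argument, the JIT may compile
            // this branch as a deoptimization point instead of real machine code.
            return -1;
        }

        public static void main(String[] args) {
            int sum = 0;
            for (int i = 0; i < 1_000_000; i++) {
                sum += describe("warm-up");     // String-only profile
            }
            // First non-String argument: the speculation fails, this call falls
            // back to the interpreter, and describe() gets recompiled.
            sum += describe(42);
            System.out.println(sum);
        }
    }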

Lew Bloch
  • 3,364
  • 1
  • 16
  • 10