Are there any differences in the code optimization done by the same versions of:
- Oracle Java compiler
- Apache Java compiler
- IBM Java compiler
- OpenJDK Java compiler

If there are, what code would demonstrate the different optimizations? Or are they using the same compiler? If there are no known optimization differences, where could I find resources on how to test compilers for different optimizations?

-
BTW - as far as I know, the main optimizations are in fact done by the JIT compiler, not the compiler itself – Eel Lee Oct 02 '13 at 08:08
-
@EelLee The JIT compiler is the only real compiler in Java. I don't know of any other language where the transformation from source code into bytecode is called "compilation". – Marko Topolnik Oct 02 '13 at 08:15
-
@MarkoTopolnik Seems to be widely used. I don't know why you think "compilation" would only be applicable for when the end result is native code. – Kayaman Oct 02 '13 at 08:23
-
I agree strongly with Eel Lee's remark. Hotspot is an excellent JIT compiler, which can use run-time information to make better optimising decisions. Plus it can even do illegal things (such as remove synchronisation or inline non-final methods) when it sees it's currently safe to do so, and back out those optimisations if the original form ever becomes necessary. In that context, I can't imagine compiler writers have much reason to focus on performance optimisations of bytecode - and it might even hurt performance after all (if it's harder to optimise at runtime). – Andrzej Doyle Oct 02 '13 at 08:28
-
@MarkoTopolnik - I think it's more a matter of semantics; however, I'm quite a beginner myself so I see no point in arguing :) – Eel Lee Oct 02 '13 at 09:02
-
@AndrzejDoyle - nice explication – Eel Lee Oct 02 '13 at 09:02
-
@EelLee My comment wasn't at all supposed to be argumentative---it is just a follow-up to your comment. My point is that calling it "compilation" brings in wrong assumptions such as OP's. The nature of the process is closer to the term "translation". Again, I do not dispute that "compilation" is what the process is actually called. – Marko Topolnik Oct 02 '13 at 10:52
-
@Marko Topolnik: Assuming the [Java processor](http://en.wikipedia.org/wiki/Java_processor) was a good idea, we would have to call it compilation, wouldn't we? – maaartinus Oct 02 '13 at 11:04
-
@maaartinus This is another way to show why it *wasn't* a good idea: bytecode is too high-level to be implemented directly in hardware with good performance. A compiler must do much more legwork to be useful. – Marko Topolnik Oct 02 '13 at 11:10
3 Answers
No, they do not use the same compiler. I can't comment much about the optimizations, but here's an example of how the compilers differ in their behaviour.
public class Test {
    public static void main(String[] args) {
        int x = 1L; // <- this cannot compile
    }
}
If you use the standard Java compiler (javac), it'll report a compilation error and the class file won't be created.
But if you use the Eclipse compiler for Java (ECJ), it'll not only report the same compilation error but will also create a class file (yes, a class file for uncompilable code, which makes ECJ, I wouldn't say wrong, but a bit tricky), which looks something like this:
public static void main(String[] paramArrayOfString)
{
    throw new Error("Unresolved compilation problem: \n\tType mismatch: cannot convert from long to int.\n");
}
Having said that, this is just a comparison between two compilers. Other compilers may have their own ways of working.
P.S: I took this example from here.
-
That's a valid observation, but it's perhaps worth mentioning that ECJ isn't a "correct" compiler in that it deliberately violates some rules. In this example (as you state) the class cannot compile and yet ECJ generates bytecode anyway. The more interesting comparison is between compilers that generate different forms of legitimate/correct output given the same input. But that's a lot harder to give examples of. – Andrzej Doyle Oct 02 '13 at 08:24
-
@AndrzejDoyle - I agree with you on most of the cases, but I really don't feel like calling the ECJ an "incorrect compiler", just because it has deliberately violated some rules. This is my view though and everybody has their point of view and I respect yours! – Rahul Oct 02 '13 at 08:35
Are there any differences in the code optimization done by the same versions of: Oracle Java compiler, Apache Java compiler, IBM Java compiler, OpenJDK Java compiler?
While the compilers can be very different, javac does almost no optimisation. The main optimisation is constant inlining, and this is specified in the JLS and thus standard (except for any bugs).
If there are, what code would demonstrate the different optimizations?
You can do this.
final String w = "world";
String a = "hello " + w;
String b = "hello world";
String c = w;
String d = "hello " + c;
System.out.println(a == b); // these are the same String
System.out.println(d == b); // these are NOT the same String
In the first case, the constant was inlined and the String concatenated at compile time. In the second case the concatenation was performed at runtime and a new String created.
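A small hedged follow-up, reusing the variable names from the snippet above, shows that only the identity differs while the contents are equal:
System.out.println(d.equals(b));     // true - the characters are the same
System.out.println(d.intern() == b); // true - intern() returns the pooled instance the literal already uses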
Or are they using the same compiler?
No, but 99% of optimisations are performed at runtime by the JIT, so these are the same for a given version of the JVM.
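If you want to watch the JIT doing that work, here is a minimal sketch, assuming a HotSpot-based JVM (the class name is made up): run it with the standard flag -XX:+PrintCompilation and the log shows methods being compiled at runtime, regardless of which source compiler produced the bytecode.
public class HotLoop {
    static long sum(int n) {
        long s = 0;
        for (int i = 0; i < n; i++) {
            s += i;
        }
        return s;
    }

    public static void main(String[] args) {
        long total = 0;
        // Enough calls to make sum() "hot" so the JIT compiles it;
        // run with: java -XX:+PrintCompilation HotLoop
        for (int i = 0; i < 20000; i++) {
            total += sum(1000);
        }
        System.out.println(total);
    }
}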
If there are no known optimization differences, where could I find resources on how to test compilers for different optimizations?
I would be surprised if there is one, as this doesn't sound very useful. The problem is that the JIT optimises pre-built templates of byte code, and if you attempt to optimise the byte code you can end up confusing the JIT and getting slower code. I.e. there is no way to evaluate an optimisation without considering the JVM it will be run on.

The only compilers that I have spent a great deal of time with are javac (which, as others have pointed out, does very little in terms of eager optimization) and the Eclipse compiler.
While writing a Java decompiler, I have observed a few (often frustrating) differences in how Eclipse compiles code, but not many. Some of them could be considered optimizations. Among them:
- The Eclipse compiler appears to perform at least some duplicate code analysis. If two (or more?) blocks of code both branch to separate but equivalent blocks of code, the equivalent target blocks may be flattened into a single block with multiple entry jumps. I have never seen javac perform this type of optimization; the equivalent blocks would always be emitted. All of the examples I can recall happened to occur in switch statements (a sketch of this pattern follows after this list). This optimization reduces the method size (and therefore class file size), which may improve load and verification time. It may even result in improved performance in interpreted mode (particularly if the interpreter in question performs inlining), but I imagine such an improvement would be slight. I doubt it would make a difference once the method has been JIT compiled. It also makes decompilation more difficult (grrr).
- Basic blocks are often emitted in a completely different order from javac. This may simply be a side effect of the compiler's internal design, or it may be that the compiler is trying to optimize the code layout to reduce the number of jumps. This is the sort of optimization I would normally leave to the JIT, and that philosophy seems to work fine for javac.
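Here is a sketch of the switch pattern described in the first point above. The class is hypothetical, and whether a given ECJ version actually merges the equivalent case bodies depends on the version; comparing the javap -c output for the javac- and ECJ-compiled class files is how you would check.
public class SwitchShape {
    static int classify(int code) {
        int result;
        switch (code) {
            case 1:
                result = code * 2;  // equivalent block #1
                break;
            case 2:
                result = code * 2;  // equivalent block #2, written out separately
                break;
            default:
                result = -1;
                break;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(classify(1) + " " + classify(2) + " " + classify(5));
    }
}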
