Disclaimer: I don't have much experience with reverse-engineering bytecode, so please don't be too harsh on me if inspecting the bytecode could "easily" answer my question.
On modern processors, branching can be extremely expensive if prediction fails (see Why is it faster to process a sorted array than an unsorted array?).
Let's say I have some short-circuit evaluation in Java like this:
if (condition && (list!=null) && (list.size()>0)) /* Do something */ ;
Is that basically equivalent to a bunch of branches like this:
if (condition) {
    if (list != null) {
        if (list.size() > 0) {
            // Do something
        }
    }
}
or does Java have some other way to do the short-circuiting more cleverly?
In other words, would it be better to avoid at least one branch by rewriting the line like this:

if ((condition & (list != null)) && (list.size() > 0)) /* Do something */ ;

since the simple list != null check is much less expensive than a potentially ill-predicted branch? (Clearly I can't get rid of the second && without risking a NullPointerException.)
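To make the semantic difference concrete, here is a minimal sketch (the class and the tracked() helper are my own illustration, not from any library) showing that & always evaluates both operands, while && skips the right-hand side when the left is false:

```java
public class ShortCircuitDemo {
    static int evaluations = 0;

    // Helper that records each time it is evaluated, then returns its argument.
    static boolean tracked(boolean value) {
        evaluations++;
        return value;
    }

    public static void main(String[] args) {
        evaluations = 0;
        boolean a = false && tracked(true); // && short-circuits: tracked() is never called
        System.out.println("&& evaluations: " + evaluations);

        evaluations = 0;
        boolean b = false & tracked(true);  // & is non-short-circuiting: tracked() runs anyway
        System.out.println("&  evaluations: " + evaluations);
    }
}
```

So the & variant trades the skipped evaluation for unconditional execution of the right-hand operand, which is exactly why it only makes sense when that operand is cheap and side-effect-free (like list != null).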
Now, before I get ripped to shreds with "premature optimization is the root of all evil!": please keep in mind that this is a choice between general coding habits (always use short-circuiting vs. never use short-circuiting unless required) that will affect pretty much all of my code, so making sure I pick the right habit is definitely worth spending some thought on.