Consider the following loops:
while ((expressionA) & (expressionB)) {
    // do something
}

while ((expressionA) && (expressionB)) {
    // do something
}
where expressionA and expressionB are expressions of type bool and expressionB has no side effects. Under these conditions, the two cases are as-if-equivalent (right?).
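One way to sanity-check the equivalence is to compile a minimal pair and diff the assembly. In the sketch below the two expressions are stood in for by simple side-effect-free comparisons of my own choosing; compile both with, say, gcc -O2 -S and compare the output:

unsigned sum_bitwise(unsigned a, unsigned b) {
    // Both operands are side-effect-free bools, so under the as-if rule
    // the compiler is free to evaluate both and fold them into one test.
    unsigned s = 0;
    while ((a > 0) & (b > 0)) { s += a; --a; --b; }
    return s;
}

unsigned sum_logical(unsigned a, unsigned b) {
    // Identical except for && in place of &.
    unsigned s = 0;
    while ((a > 0) && (b > 0)) { s += a; --a; --b; }
    return s;
}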
A (hypothetical) compiler that naively takes its cue from the source code would put a branch in the && version, and we would end up paying for branch-prediction failures.
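To make "put a branch in" concrete, here is a sketch of how a naive lowering could differ, written with the same placeholder expressions; it is illustrative only, not what any particular compiler actually emits:

// Naive lowering of the && version: a second conditional branch guards expressionB.
while (true) {
    if (!(expressionA)) break;  // branch 1
    if (!(expressionB)) break;  // branch 2: the short-circuit branch
    // do something
}

// Naive lowering of the & version: both operands are evaluated,
// their results are combined without control flow, and one branch remains.
while (true) {
    bool a = (expressionA);
    bool b = (expressionB);     // evaluated even when a is false
    if (!(a & b)) break;
    // do something
}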
With a modern compiler (such as current GCC), can there ever be any conditions under which the & version gives a substantial performance gain over the && version?
My guess is no, because:

- If expressionB is sufficiently cheap, the compiler will recognize this and avoid creating the short-circuiting branch.
- If expressionB is sufficiently expensive, the compiler will create the short-circuit anyway, because:
  - if the probability of expressionA being true is not close to 1.0, we get a substantial average performance gain from short-circuiting;
  - if the probability of expressionA being true is close to 1.0, we won't pay much, because branch prediction will tend to succeed (the benchmark sketch below is one way to probe this).
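To test the guess empirically, a microbenchmark along these lines could vary the probability that expressionA holds and time both forms. Everything here (expensive_check, time_loop, the chosen probabilities) is an illustrative stand-in, not part of the original question:

#include <chrono>
#include <cstdio>
#include <initializer_list>
#include <random>
#include <vector>

// Moderately expensive, side-effect-free stand-in for expressionB.
static bool expensive_check(unsigned x) {
    unsigned h = x;
    for (int i = 0; i < 16; ++i) h = h * 2654435761u + 0x9e3779b9u;
    return (h & 1u) != 0;
}

template <typename LoopBody>
static double time_loop(const std::vector<unsigned>& data, LoopBody body) {
    auto t0 = std::chrono::steady_clock::now();
    volatile unsigned long sink = body(data);  // keep the result alive
    (void)sink;
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

int main() {
    std::mt19937 rng(42);
    for (double p : {0.05, 0.5, 0.95}) {  // probability that expressionA is true
        std::vector<unsigned> data(1000000);
        std::bernoulli_distribution coin(p);
        for (auto& x : data) x = coin(rng) ? (rng() | 1u) : 0u;  // 0 => expressionA false

        double t_and = time_loop(data, [](const std::vector<unsigned>& d) {
            unsigned long n = 0;
            for (unsigned x : d)
                if ((x != 0u) & expensive_check(x)) ++n;   // bitwise: both sides evaluated
            return n;
        });
        double t_andand = time_loop(data, [](const std::vector<unsigned>& d) {
            unsigned long n = 0;
            for (unsigned x : d)
                if ((x != 0u) && expensive_check(x)) ++n;  // logical: short-circuits
            return n;
        });
        std::printf("p=%.2f   &: %7.2f ms   &&: %7.2f ms\n", p, t_and, t_andand);
    }
    return 0;
}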