First, code for correctness; then code for clarity (the two are often connected, of course!). Finally, and only if you have real empirical evidence that you actually need to, you can look at optimizing. Premature optimization really is evil. Optimization almost always costs you time, clarity, and maintainability, so be sure you're buying something worthwhile with that.
y = (x > 0) * value1 + (x <= 0) * value2;
Don't use this in any of your code. It is a good example of how to write terrible code, because it is not intuitive at all. Also, whether you gain any performance at all depends on your machine architecture (in particular, on how many cycles a multiplication instruction takes on it).
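For comparison, here is a minimal self-contained sketch (the names x, value1, and value2 come from the fragment above) that puts the branchless trick next to the readable conditional it replaces. With optimizations enabled, compilers often turn the plain ternary into a branch-free conditional move anyway:

#include <stdio.h>

int main(void) {
    int x = 5, value1 = 10, value2 = 20;

    /* Branchless form: relies on the comparisons evaluating to 0 or 1 */
    int y_branchless = (x > 0) * value1 + (x <= 0) * value2;

    /* Readable equivalent; optimizers often emit a conditional move here */
    int y_plain = (x > 0) ? value1 : value2;

    printf("%d %d\n", y_branchless, y_plain);
    return 0;
}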
However, conditional statements in C and C++ (e.g. if-else), at the very lowest level (in the hardware), can be expensive. To understand why, you have to understand how pipelines work: a mispredicted branch can cause a pipeline flush, reducing the efficiency of the processor.
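To make that cost concrete, here is a small illustration (not from the original answer): a loop whose branch is taken essentially at random. On typical hardware, the same loop runs noticeably faster if the array is sorted first, because the branch predictor stops mispredicting, even though the code itself is unchanged:

#include <stdio.h>
#include <stdlib.h>

#define N (1 << 20)

int main(void) {
    static int data[N];
    for (int i = 0; i < N; i++)
        data[i] = rand() % 256;     /* random values: branch below is unpredictable */

    long sum = 0;
    for (int i = 0; i < N; i++) {
        if (data[i] >= 128)         /* mispredicted roughly half the time on random data */
            sum += data[i];
    }
    printf("%ld\n", sum);
    return 0;
}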
The Linux kernel uses an optimization technique for conditional statements: __builtin_expect. When working with if-else statements, we often know which branch is most probable. If the compiler knows this information in advance, it can generate better-optimized code.
#define likely(x) __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)
if (likely(x > 0)) {
y = value1;
} else {
y = value2;
}
In the above example, I have marked the if condition as likely() true, so the compiler will place the true-branch code immediately after the branch instruction and the false-branch code further away. This is how the compiler achieves the optimization. But don't use the likely() and unlikely() macros blindly. If the prediction is correct, the jump costs effectively zero cycles; but if it is wrong, it takes several cycles, because the processor needs to flush its pipeline, which is worse than no prediction at all.
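As a concrete usage sketch (assuming GCC or Clang, since __builtin_expect is a compiler extension; the helper below is a hypothetical example, not kernel code), the classic place for unlikely() is an error path that almost never executes:

#include <stdio.h>
#include <stdlib.h>

#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

/* Hypothetical helper: allocation failure is rare, so the error path is
   marked unlikely(); the compiler lays out the common case as the
   fall-through, keeping the hot path free of taken branches. */
static void *xmalloc(size_t n) {
    void *p = malloc(n);
    if (unlikely(p == NULL)) {
        fprintf(stderr, "out of memory\n");
        exit(EXIT_FAILURE);
    }
    return p;
}

int main(void) {
    char *buf = xmalloc(64);
    free(buf);
    return 0;
}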