Clear code over (naive) micro-optimizations
You are essentially making wrong assumptions about the compiler's actual behavior. In both cases, that is:
if (num < 0 || num > 1) { ...
and
if (num != 0 && num != 1) { ...
an optimizing compiler will reduce the check to its shortest form anyway. You can verify that both generate the same assembly, which might look like this (on x86):
cmp $0x1,%eax
jbe 1e <foo+0x1e> # jump if below or equal
This is already fast enough, as the cmp
instruction has a latency of one cycle on all major architectures.
The bottom line is to choose whichever code makes your intent clear to you and to future maintainers, and let the compiler do its job. Just make sure that you compile with a proper optimization level (e.g. -O2
or higher).
Aid branch prediction
However, if performance is really crucial here (and you have profiled it to confirm that, haven't you?), then you could think about another kind of optimization, at the branch prediction level (assuming that your CPU supports it). GCC has the __builtin_expect
intrinsic, which lets you hint to the compiler that in most cases a branch will (or will not) be taken.
You may use __builtin_expect
to provide the compiler with branch
prediction information. In general, you should prefer to use actual
profile feedback for this (-fprofile-arcs), as programmers are
notoriously bad at predicting how their programs actually perform.
However, there are applications in which this data is hard to collect.
For instance, if you are confident that the function takes 0
or 1
in approximately 99% of cases, then you could write it as:
#define unlikely(x) __builtin_expect((x), 0)
if (unlikely(num != 0 && num != 1)) { ...