Yes, exactly. It is used to give the compiler more information about the if statement, so that it can generate optimal code for the target micro-architecture.
While each micro-architecture has its own way of being informed about the likelihood of a branch, we can take a simple example from the Intel Optimization Manual:
Assembly/Compiler Coding Rule 3. (M impact, H generality) Arrange code to be consistent with
the static branch prediction algorithm: make the fall-through code following a conditional branch be the
likely target for a branch with a forward target, and make the fall-through code following a conditional
branch be the unlikely target for a branch with a backward target.
Simply put, the static prediction for forward branches is not-taken (so the code right after the branch is the likely path and is speculatively executed), while for backward branches it is taken (so the code right after the branch is the unlikely path and is not speculatively executed).
Consider this code for GCC:
#define probably_true(x) __builtin_expect(!!(x), 1)
#define probably_false(x) __builtin_expect(!!(x), 0)
int foo(int a, int b, int c)
{
if (probably_true(a==2))
return a + b*c;
else
return a*b + 2*c;
}
where I used the built-in __builtin_expect to simulate a [[probably(true)]] attribute.
This gets compiled into:
foo(int, int, int):
cmp edi, 2 ;Compare a with 2
jne .L2 ;If not equal, jump to .L2
;This is the likely path (fall-through of a forward branch)
;return a + b*c;
.L2:
;This is the unlikely path (target of a forward branch)
;return a*b + 2*c;
ret
where I've omitted some of the assembly code for brevity.
If you replace probably_true with probably_false, the code becomes:
foo(int, int, int):
cmp edi, 2 ;Compare a with 2
je .L5 ;If equal, jump to .L5
;This is the likely path (fall-through of a forward branch)
;return a*b + 2*c;
.L5:
;This is the unlikely path (target of a forward branch)
;return a + b*c;
ret
You can play with this example at godbolt.org.