Due to the pipelined nature of modern CPUs, new instructions begin to be processed before previous ones have finished; exactly how many instructions are in flight at once varies with the CPU architecture and the type of instruction. The reason for pipelining is more efficient utilisation of the CPU's components, which improves instruction throughput. Without it, for example, the circuitry that fetches the next instruction would lie idle for at least a few cycles while the previous instruction works through its stages (source-register read, data-cache access, arithmetic execution, and so on).
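To make the overlap concrete, here is a rough sketch assuming the classic five-stage pipeline (fetch, decode, execute, memory access, write-back); real CPUs have many more stages and issue several instructions per cycle, but the principle is the same:

```c++
// Illustrative only: how three independent additions could overlap in a
// classic five-stage pipeline (IF = fetch, ID = decode, EX = execute,
// MEM = memory access, WB = write-back). Real pipelines are much deeper.
int overlap_example(int x, int y, int z) {
    int a = x + 1;  // instruction I1
    int b = y + 2;  // instruction I2
    int c = z + 3;  // instruction I3
    //          cycle:  1    2    3    4    5    6    7
    //   I1:            IF   ID   EX   MEM  WB
    //   I2:                 IF   ID   EX   MEM  WB
    //   I3:                      IF   ID   EX   MEM  WB
    // Each cycle the fetch circuitry starts a new instruction instead of
    // idling until the previous one has fully completed.
    return a + b + c;
}
```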
Pipelining introduces its own challenges, though: one example is how the instruction-fetch stage is supposed to know which instruction to fetch next when a conditional jump is in the pipeline. A conditional jump (such as the one necessitated by your `if` above) requires a condition to be evaluated before the CPU knows which instruction comes next, but that evaluation only happens several stages later in the pipeline. While the jump works its way through those stages, the pipeline must keep going and new instructions must keep being loaded; otherwise you would lose efficiency waiting for the condition to be resolved (a pipeline stall, a situation CPUs try to avoid). Without knowing for sure where the next instructions should come from, the CPU has to guess: this is known as branch prediction. If it guesses correctly, the pipeline keeps running at full tilt once the condition has been evaluated and the target address confirmed. If it guesses wrong, every instruction started after the conditional jump must be flushed from the pipeline and execution restarted from the correct target address: an expensive event that branch-prediction algorithms try to minimise.
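For concreteness, here is a hedged sketch (assuming C++ and x86-64, with the function names taken from your question) of the compare-and-jump such an `if`/`else` typically lowers to; the exact instructions depend on your compiler, flags and target:

```c++
// Hedged sketch: roughly what `if (condition()) doA(); else doB();` becomes.
// This x86-64-style listing is only illustrative.
//
//     call condition        ; result only known once the call has executed
//     test eax, eax         ; set flags from the result
//     je   .else            ; conditional jump: the fetch stage cannot wait
//     call doA              ;   for the flags, so the CPU predicts whether
//     jmp  .done            ;   the jump is taken and speculates down that
// .else:                    ;   path, flushing the pipeline if it was wrong
//     call doB
// .done:

bool condition();  // assumed signatures, matching the names in your example
void doA();
void doB();

void run() {
    if (condition()) doA(); else doB();
}
```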
Applying this to your example above: if branch prediction correctly guesses the outcome of `condition()` a large percentage of the time, execution of either `doA()` or `doB()` continues without a pipeline flush; otherwise the conditional statement imposes a performance hit. That happens if the outcome of `condition()` is effectively random from call to call, or follows a pattern the branch prediction algorithm finds hard to learn.
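If you want to see the effect yourself, a minimal benchmark sketch along these lines (assuming C++; `sum_if_set` and the element count are arbitrary choices of mine) runs the same branch over a predictable pattern and over random data. Be aware that an optimising compiler may turn the branch into a conditional move, in which case the gap disappears:

```c++
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <random>
#include <vector>

long long sum_if_set(const std::vector<int>& flags) {
    long long sum = 0;
    for (int f : flags) {
        if (f) sum += 3;   // the branch whose predictability we vary
        else   sum += 1;
    }
    return sum;
}

int main() {
    const std::size_t n = 20'000'000;
    std::vector<int> predictable(n), random_flags(n);
    std::mt19937 rng(42);
    for (std::size_t i = 0; i < n; ++i) {
        predictable[i]  = (i % 8) < 4;   // regular pattern: easy to predict
        random_flags[i] = rng() & 1;     // coin flip: hard to predict
    }

    auto time_it = [](const std::vector<int>& flags) {
        auto start = std::chrono::steady_clock::now();
        long long s = sum_if_set(flags);
        auto stop = std::chrono::steady_clock::now();
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(stop - start);
        std::printf("sum=%lld  %lld ms\n", s, (long long)ms.count());
    };

    time_it(predictable);   // branch predictor locks onto the pattern
    time_it(random_flags);  // frequent mispredictions -> pipeline flushes
}
```

On typical hardware the random input runs noticeably slower even though the work per element is identical; the difference is purely the cost of mispredicted branches.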