I learned about this trick the other day by inspecting the machine code generated by gcc. Dividing an integer a by a constant b can be optimized as follows:
x = a / b
x = a * (1 / b)
x = (a * (((1 << n) / b) + 1)) >> n
Since b and n are constants, the reciprocal can be evaluated at compile time, reducing the division to a multiply and a shift, which is cheaper than a hardware divide:
c = ((1 << n) / b) + 1
x = (a * c) >> n
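For concreteness, here is a small self-contained C sketch of the trick as I understand it (the names B, N, C and divide_by_b are mine, and I only checked small inputs; gcc presumably picks n and the constant so the identity holds over the whole input range):

#include <stdint.h>
#include <stdio.h>
#include <assert.h>

enum { B = 7, N = 16 };                        /* example divisor and shift, my choice */
static const uint32_t C = ((1u << N) / B) + 1; /* 9363: the compile-time reciprocal */

static uint32_t divide_by_b(uint32_t a)
{
    /* widen to 64 bits so a * C cannot overflow before the shift */
    return (uint32_t)(((uint64_t)a * C) >> N);
}

int main(void)
{
    /* with B = 7 and N = 16 the identity holds for a up to about 13000 */
    for (uint32_t a = 0; a < 10000; a++)
        assert(divide_by_b(a) == a / B);
    printf("20 / 7 = %u\n", divide_by_b(20)); /* prints 2, truncated like a / b */
    return 0;
}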
Now, this has the same effect as plain integer division: it truncates the result (rounding towards zero). Is it possible to modify this algorithm to round to the nearest value instead?
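To make the goal concrete, here is a sketch of the kind of adjustment I have in mind (my own guess, not something I have seen gcc emit): for non-negative a, rounding a / b to the nearest value is the same as truncating (a + b/2) / b, so perhaps the bias can simply be folded into the numerator. Whether this stays correct over the same input range as the truncating version is exactly what I am unsure about.

/* hypothetical round-to-nearest variant, reusing B, N, C from the sketch above */
static uint32_t divide_by_b_rounded(uint32_t a)
{
    /* round(a / B) == (a + B/2) / B for non-negative a; halfway cases round up */
    return (uint32_t)(((uint64_t)(a + B / 2) * C) >> N);
}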