The C++ standard provides div(int, int), but not udiv(unsigned int, unsigned int).
If I naively passed unsigned ints to this function, it would yield the wrong result for any numerator greater than 2^31 - 1, since such values get converted to negative ints. For example (with 4-bit nibbles):
The largest 4-bit nibble is 15 (1111 in binary); interpreted as a signed nibble, the same bit pattern represents -1. Dividing 15 by 2 yields 7 (0111), but dividing -1 by 2 yields 0 (0000).
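Here's a minimal sketch of the problem on a typical platform with 32-bit int (the signed results assume the usual two's-complement conversion):

```cpp
#include <cstdlib>
#include <cstdio>
#include <climits>

int main() {
    unsigned int n = UINT_MAX;  // 2^32 - 1; the same bits read as a signed int are -1

    // std::div only takes signed arguments, so the value has to be
    // converted to int first (explicitly here, to avoid overload ambiguity):
    std::div_t r = std::div(static_cast<int>(n), 2);
    std::printf("div:      quot = %d, rem = %d\n", r.quot, r.rem);    // quot = 0, rem = -1
    std::printf("unsigned: quot = %u, rem = %u\n", n / 2u, n % 2u);   // quot = 2147483647, rem = 1
}
```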
Is there a straightforward way to adapt div to unsigned integers, or am I better off writing my own udiv, or avoiding the use of div and div-like functions altogether?
Edit/Note: In my case, I'm using unsigned long long ints, so using lldiv doesn't solve the problem.
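To illustrate what I mean by writing my own udiv, here's a rough sketch (udiv_t and udiv are hypothetical names of my own, not anything from the standard) that just wraps the built-in / and % operators, which are well-defined for unsigned types:

```cpp
#include <cstdio>

// Hypothetical stand-in for the udiv the standard doesn't provide.
// The built-in / and % are already well-defined for unsigned types,
// and compilers typically fuse the two into a single division.
template <typename U>
struct udiv_t {
    U quot;
    U rem;
};

template <typename U>
udiv_t<U> udiv(U numer, U denom) {
    return { numer / denom, numer % denom };
}

int main() {
    // 2^64 - 1: a value lldiv can't handle correctly
    auto r = udiv(18446744073709551615ULL, 2ULL);
    std::printf("quot = %llu, rem = %llu\n", r.quot, r.rem);  // quot = 9223372036854775807, rem = 1
}
```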