Background:
Taking an unsigned integer modulo a power of 2 is much faster than modulo a non-power of 2, thanks to the binary representation of numbers. All that is needed is to mask away the higher-order bits, which takes a single AND instruction.
Example in C:
unsigned x;
unsigned xmod16 = x&15; // Fast
Vs.
unsigned x;
unsigned xmod17 = x%17; // Slow
Question:
Floating-point numbers are represented differently from integers, but they are still built from a binary significand and exponent. Is there any way to exploit the binary nature of floating point to optimize modulo-by-power-of-2 operations on floating-point values?
float x;
float xmod17 = fmodf(x, 17); // Slow
float xmod16 = fmodf(x, 16); // Also Slow
Vs.
union BitCast {
    float f;
    unsigned i;
} x;
float xmod17 = fmodf(x.f, 17); // Slow
// Magic
float xmod16; // Fast
There are many different floating-point representations out there, but for the purposes of this question I am referring specifically to the standard IEEE 754 format.
If there is a clever yet portable trick, that would be awesome, but less portable solutions that apply specifically to IEEE 754 are welcome too.