I saw some C++ code:
float randfloat(float a, float b)
{
    uint32_t val = 0x3F800000 | (rng.getU32() >> 9);
    float fval = *(float *)(&val);
    return a + ((fval - 1.0f) * (b - a));
}
The complete code is in a GitHub gist. It seems to first bitwise-OR the binary number 111111110000000000000000000000 (that is, 0x3F800000)
with an unsigned random integer that is shifted right by 9 positions.
And then those bits are reinterpreted as a float? I would have guessed that it returns a random float between a and b, with a inclusive and b exclusive. Some other parts of the code use randfloat(0.9, 0.85), and I think that is "almost" the same as randfloat(0.85, 0.9), except the first case is exclusive of 0.85, while the second case is exclusive of 0.9.
Does anybody know what is going on with the part 0x3F800000 | (rng.getU32() >> 9)
and the reinterpretation of those bits as a float -- is this relying on the IEEE 754 representation?