I got hold of a super-fast algorithm that generates an array of uniformly distributed random bytes. It's 6 times faster than the C++ standard library's uniform distribution driven by the Mersenne Twister engine.
The array's length is divisible by 4, so it can be reinterpreted as an array of integers. Reading each 4-byte entry as an `int` produces values uniformly distributed over [INT_MIN, INT_MAX]. But how can I transform these integers so they lie within my own [`min`, `max`]?
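For clarity, this is the kind of reinterpretation I mean (a minimal sketch; the function and buffer names are mine, and `std::memcpy` is used to stay clear of strict-aliasing issues):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Minimal sketch of the reinterpretation (names are mine).
// Reading through std::memcpy avoids the strict-aliasing pitfalls
// of casting the byte pointer directly to int*.
inline uint32_t readU32(const unsigned char* bytes, std::size_t i) {
    uint32_t v;
    std::memcpy(&v, bytes + 4 * i, sizeof v);
    return v;  // uniform over [0, 2^32); as int32_t that is [INT_MIN, INT_MAX]
}
```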
I want to avoid any if-else, so the code stays branch-free.
Maybe I should apply some bitwise logic to discard the irrelevant bits of each number? If I can find the most significant set bit of my `max` value, I could mask off every bit above it, and the remaining, unmasked bits would each be 0 or 1 independently anyway.
For example, if I want my `max` to be 17, that is 00010001 in binary. Maybe my mask should then be 00011111? I could apply it to every number in my array. But this mask is wrong: it actually allows values up to 1 + 2 + 4 + 8 + 16 = 31. :(
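To make the overshoot concrete, here is the classic bit-smear trick that computes the smallest all-ones mask covering a value (a sketch; `maskFor` is my own name):

```cpp
#include <cstdint>

// Bit-smear trick: smallest all-ones mask covering v (for v > 0).
// maskFor is my own name, used only for illustration.
inline uint32_t maskFor(uint32_t v) {
    v |= v >> 1;  v |= v >> 2;  v |= v >> 4;
    v |= v >> 8;  v |= v >> 16;
    return v;  // e.g. 17 (0b10001) -> 31 (0b11111)
}

// x & maskFor(17) is uniform over [0, 31], not [0, 17]:
// values 18..31 still pass the mask, so a rejection step (a branch)
// or some further reduction would be needed.
```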
What can I do? And how do I take care of the `min`?
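For reference, one standard branch-free shape is the fixed-point multiply-shift reduction described by Daniel Lemire (a sketch; the function name is mine, and it assumes the interval is smaller than the full 32-bit range):

```cpp
#include <cstdint>

// Sketch of the multiply-shift reduction (names are mine).
// Assumes max > min and that (max - min + 1) fits in 32 bits,
// i.e. the target interval is not the full [INT_MIN, INT_MAX].
inline int32_t toRange(uint32_t x, int32_t min, int32_t max) {
    uint32_t range  = (uint32_t)max - (uint32_t)min + 1u;       // interval size
    uint32_t scaled = (uint32_t)(((uint64_t)x * range) >> 32);  // in [0, range)
    return (int32_t)((uint32_t)min + scaled);                   // shift into [min, max]
}
```

Note that without an extra rejection step this keeps a tiny bias (some outputs occur once more often than others), which shrinks as the interval gets small relative to 2^32.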
Edit
I am generating millions of numbers every frame of my application, for neural networks. I managed to vectorize the code with AVX2 for the float variants (using this post), but I need to get the integer path working too.
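For reference, a sketch of how that same multiply-shift reduction might vectorize with AVX2 integer intrinsics; everything here (names, layout) is my own assumption, not tested code:

```cpp
#include <cstdint>
#include <immintrin.h>

// Sketch: the same multiply-shift reduction on 8 uint32 lanes with
// AVX2 only, no branches (all names are mine, untested).
// _mm256_mul_epu32 multiplies only the even 32-bit lanes into 64-bit
// products, so the odd lanes are handled via a 32-bit right shift.
inline __m256i toRangeAvx2(__m256i x, int32_t min, int32_t max) {
    const __m256i range  = _mm256_set1_epi32((int32_t)((uint32_t)max - (uint32_t)min + 1u));
    const __m256i vmin   = _mm256_set1_epi32(min);
    const __m256i himask = _mm256_set1_epi64x((long long)0xFFFFFFFF00000000ULL);

    __m256i even = _mm256_mul_epu32(x, range);                        // lanes 0,2,4,6
    __m256i odd  = _mm256_mul_epu32(_mm256_srli_epi64(x, 32), range); // lanes 1,3,5,7

    even = _mm256_srli_epi64(even, 32);    // high halves land in the even lanes
    odd  = _mm256_and_si256(odd, himask);  // high halves already sit in the odd lanes

    return _mm256_add_epi32(_mm256_or_si256(even, odd), vmin);        // shift into [min, max]
}
```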