I have an application where I need to measure how many bits of randomness an algorithm consumes. I have instrumented a subclass of Random
that overrides Random.next(int)
to increment a counter before delegating to the superclass implementation.
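A minimal sketch of such a subclass (the class and field names are my own):

```java
import java.util.Random;

// Counts every random bit drawn from the underlying generator.
class CountingRandom extends Random {
    long bitsUsed = 0;

    @Override
    protected int next(int bits) {
        bitsUsed += bits;           // charge exactly the number of bits requested
        return super.next(bits);
    }
}
```

Since every public method of Random ultimately goes through next(int), this counts all randomness consumed: nextBoolean() adds 1 bit, and nextInt(16) adds 31.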
I have run into some issues with the implementation of the nextInt(int)
method: it always draws 31 bits (one call to next(31)), even when the range is a power of two. For other ranges it is even more wasteful: whenever the value drawn lands above the greatest multiple of the range that fits in 31 bits, the method rejects it and draws a fresh 31 bits, so it consumes far more bits of randomness than are needed.
How can I implement a better version of nextInt(int)
that uses only the bare minimum number of random bits needed to determine a value within the range, while being perfectly uniform? Guaranteed termination is not required (it is impossible anyway); termination with probability 1 is enough.
Edit:
Here is what I have so far:
int nextInt(int max) {
    int n = Integer.numberOfTrailingZeros(max);
    return (nextOddInteger(max >> n) << n) | next(n);
}
This might not be exactly correct, but the idea is: factor all n
twos out of max
, generate the n
low-order bits uniformly with next(n)
, and prepend the value drawn by nextOddInteger(max >> n)
. nextOddInteger
would generate a uniformly random integer below an odd number, i.e. a number whose prime factorization contains no twos. How can I implement this part in a very randomness-efficient way?
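The best candidate I have for nextOddInteger so far is plain rejection sampling on the minimum whole number of bits, which terminates with probability 1 but is presumably not bit-optimal. (The class wrapper is only there so the sketch compiles, since next(int) is protected in Random.)

```java
import java.util.Random;

class EfficientRandom extends Random {
    // Candidate nextOddInteger: draw ceil(log2(max)) bits and retry while
    // the result is out of range. Each attempt succeeds with probability
    // max / 2^ceil(log2(max)) > 1/2, so this terminates with probability 1.
    int nextOddInteger(int max) {
        if (max == 1) return 0;                                // no bits needed
        int bits = 32 - Integer.numberOfLeadingZeros(max - 1); // ceil(log2(max))
        int r;
        do {
            r = next(bits);
        } while (r >= max);
        return r;
    }
}
```

On average this uses fewer than 2 * ceil(log2(max)) bits per call, but every rejected draw is discarded entirely, which is exactly the waste I would like to avoid.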