30

We can easily get random floating point numbers within a desired range [X,Y) (note that X is inclusive and Y is exclusive) with the function listed below since Math.random() (and most pseudorandom number generators, AFAIK) produce numbers in [0,1):

function randomInRange(min, max) {
  return Math.random() * (max-min) + min;
}
// Notice that we can get "min" exactly but never "max".

How can we get a random number in a desired range inclusive to both bounds, i.e. [X,Y]?

I suppose we could "increment" our value from Math.random() (or equivalent) by "rolling" the bits of an IEEE-754 floating point double precision to put the maximum possible value at 1.0 exactly, but that seems like a pain to get right, especially in languages poorly suited for bit manipulation. Is there an easier way?

(As an aside, why do random number generators produce numbers in [0,1) instead of [0,1]?)

[Edit] Please note that I have no need for this and I am fully aware that the distinction is pedantic. Just being curious and hoping for some interesting answers. Feel free to vote to close if this question is inappropriate.

MD XF
  • 7,860
  • 7
  • 40
  • 71
maerics
  • 151,642
  • 46
  • 269
  • 291
  • Can you explain how having max inclusive will make any significant difference whatsoever? These are floating-point numbers. 0.499999999 should not be much different than 0.5. – Kendall Frey Mar 15 '12 at 16:55
  • @KendallFrey: Nope, I have no practical application for my question. I'm just curious. Feel free to vote to close if I'm just being silly. – maerics Mar 15 '12 at 16:57
  • See [http://stackoverflow.com/questions/363681/java-generating-random-number-in-a-range](http://stackoverflow.com/questions/363681/java-generating-random-number-in-a-range) – Thomas Mar 15 '12 at 16:58
  • To answer your "aside", most probably because they're generated from an integer PRNG (which will be in the range 0 -> 2^n-1), and then scaled to float. – Oliver Charlesworth Mar 15 '12 at 17:33

13 Answers

15

I believe there is a much better solution, but this one should work :)

function randomInRange(min, max) {
  // Math.random() is in [0,1) and (1 - Math.random()) is in (0,1], so a fair
  // coin flip between the two makes both endpoints of [0,1] reachable.
  return Math.random() < 0.5
    ? (1 - Math.random()) * (max - min) + min
    : Math.random() * (max - min) + min;
}
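
For example, with min = 0 and max = 1 this can return exactly 1 (when the coin flip picks the (1 - Math.random()) branch and that draw happens to be 0) as well as exactly 0.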
Alex L
  • 6,192
  • 1
  • 14
  • 8
6

First off, there's a problem in your code: Try randomInRange(0,5e-324) or just enter Math.random()*5e-324 in your browser's JavaScript console.

Even without overflow/underflow/denorms, it's difficult to reason reliably about floating point ops. After a bit of digging, I can find a counterexample:

>>> a=1.0
>>> b=2**-54
>>> rand=a-2*b
>>> a
1.0
>>> b
5.551115123125783e-17
>>> rand
0.9999999999999999
>>> (a-b)*rand+b
1.0

It's easier to explain why this happens with a=2^53 and b=0.5: 2^53-1 is the next representable number down. The default rounding mode ("round to nearest even") rounds 2^53-0.5 up (because 2^53 is "even" [LSB = 0] and 2^53-1 is "odd" [LSB = 1]), so you subtract b and get 2^53, multiply by rand to get 2^53-1, and add b to get 2^53 again.
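
The same counterexample can be reproduced in a JavaScript console (a quick sketch mirroring the Python session above):

var a = 1.0;
var b = Math.pow(2, -54);         // 5.551115123125783e-17
var rand = a - 2 * b;             // 0.9999999999999999, the largest double below 1
console.log((a - b) * rand + b);  // 1 -- the "exclusive" upper bound is hit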


To answer your second question: Because the underlying PRNG almost always generates a random number in the interval [0,2^n-1], i.e. it generates random bits. It's very easy to pick a suitable n (the bits of precision in your floating point representation) and divide by 2^n and get a predictable distribution. Note that there are some numbers in [0,1) that you will never generate using this method (anything in (0,2^-53) with IEEE doubles).
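
As an illustrative sketch of that bits-to-double mapping (assuming an environment where crypto.getRandomValues is available; this is just the general idea, not how Math.random() is specified):

function randomFromBits() {
  // Draw 53 random bits (26 + 27) and divide by 2^53: a uniform double in [0,1),
  // with 1 itself never produced.
  var words = new Uint32Array(2);
  crypto.getRandomValues(words);
  var hi = words[0] >>> 6;  // top 26 bits of the first 32-bit word
  var lo = words[1] >>> 5;  // top 27 bits of the second 32-bit word
  return (hi * Math.pow(2, 27) + lo) / Math.pow(2, 53);
}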

It also means that you can do a[Math.floor(Math.random()*a.length)] and not worry about overflow (homework: In IEEE binary floating point, prove that b < 1 implies a*b < a for positive integer a).

The other nice thing is that you can think of each random output x as representing an interval [x, x+2^-53) (the not-so-nice thing is that the average value returned is slightly less than 0.5). If you return in [0,1], do you return the endpoints with the same probability as everything else, or should they only have half the probability, because they only represent half the interval that everything else does?

To answer the simpler question of returning a number in [0,1], the method below effectively generates an integer in [0,2^n] (by generating an integer in [0,2^(n+1)-1] and throwing it away if it's too big) and divides by 2^n:

function randominclusive() {
  // Generate a random "top bit". Is it set?
  while (Math.random() >= 0.5) {
    // Generate the rest of the random bits. Are they zero?
    // If so, then we've generated 2^n, and dividing by 2^n gives us 1.
    if (Math.random() == 0) { return 1.0; }
    // If not, generate a new random number.
  }
  // If the top bits are not set, just divide by 2^n.
  return Math.random();
}

The comments imply base 2, but I think the assumptions are thus:

  • 0 and 1 should be returned equiprobably (i.e. the Math.random() doesn't make use of the closer spacing of floating point numbers near 0).
  • Math.random() >= 0.5 with probability 1/2 (should be true for even bases)
  • The underlying PRNG is good enough that we can do this.

Note that random numbers are always generated in pairs: the one in the while (a) is always followed by either the one in the if or the one at the end (b). It's fairly easy to verify that it's sensible by considering a PRNG that returns either 0 or 0.5:

  • a=0   b=0  : return 0
  • a=0   b=0.5: return 0.5
  • a=0.5 b=0  : return 1
  • a=0.5 b=0.5: loop

Problems:

  • The assumptions might not be true. In particular, a common PRNG is to take the top 32 bits of a 48-bit LCG (Firefox and Java do this). To generate a double, you take 53 bits from two consecutive outputs and divide by 2^53, but some outputs are impossible (you can't generate 2^53 outputs with 48 bits of state!). I suspect some of them never return 0 (assuming single-threaded access), but I don't feel like checking Java's implementation right now. (A sketch of this 48-bit LCG scheme follows after this list.)
  • Math.random() is called twice for every potential output as a consequence of needing to get the extra bit, but this places more constraints on the PRNG (requiring us to reason about four consecutive outputs of the above LCG).
  • Math.random() is called on average about four times per output. A bit slow.
  • It throws away results deterministically (assuming single-threaded access), so is pretty much guaranteed to reduce the output space.
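
For reference, here is a rough JavaScript sketch of that 48-bit LCG scheme (using java.util.Random's multiplier and increment; it illustrates the scheme described in the list above, not Firefox's or Java's actual code):

// 48-bit linear congruential generator, advanced with exact BigInt arithmetic.
var LCG_M = 1n << 48n;
var LCG_A = 25214903917n;  // java.util.Random's multiplier (0x5DEECE66D)
var LCG_C = 11n;           // java.util.Random's increment
var lcgState = 42n;        // arbitrary seed, for illustration only

function lcgNext(bits) {
  lcgState = (lcgState * LCG_A + LCG_C) % LCG_M;
  return Number(lcgState >> BigInt(48 - bits));  // top `bits` bits of the state
}

function lcgNextDouble() {
  // 26 bits from one output and 27 from the next give 53 bits, divided by 2^53.
  return (lcgNext(26) * Math.pow(2, 27) + lcgNext(27)) / Math.pow(2, 53);
}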
tc.
  • 33,468
  • 5
  • 78
  • 96
5

My solution to this problem has always been to use the following in place of your upper bound.

Math.nextAfter(upperBound,upperBound+1)

or

upperBound + Double.MIN_VALUE

So your code would look like this:

double myRandomNum = Math.random() * (Math.nextAfter(upperBound, upperBound + 1) - lowerBound) + lowerBound;

or

double myRandomNum = Math.random() * (upperBound + Double.MIN_VALUE - lowerBound) + lowerBound;

This simply increments your upper bound by the smallest double (Double.MIN_VALUE) so that your upper bound will be included as a possibility in the random calculation.

This is a good way to go about it because it does not skew the probabilities in favor of any one number.

The only case this wouldn't work is where your upper bound is equal to Double.MAX_VALUE.
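
Since the question itself is about JavaScript, which has no Math.nextAfter, here is a possible bit-level equivalent (a sketch only; this nextUp helper is an illustration, not a standard API):

function nextUp(x) {
  // Return the next representable double above x; NaN and infinities pass through.
  if (!Number.isFinite(x)) return x;
  if (x === 0) return Number.MIN_VALUE;  // smallest positive (subnormal) double
  var buf = new ArrayBuffer(8);
  var f64 = new Float64Array(buf);
  var u64 = new BigUint64Array(buf);
  f64[0] = x;
  u64[0] = x > 0 ? u64[0] + 1n : u64[0] - 1n;  // step the bit pattern one ulp toward +Infinity
  return f64[0];
}

// Mirroring the formula above (still subject to the rounding caveats raised in other answers):
// var myRandomNum = Math.random() * (nextUp(upperBound) - lowerBound) + lowerBound;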

Eric Hotinger
  • 8,957
  • 5
  • 36
  • 43
MetaMilo
  • 51
  • 1
  • 1
4

Just pick your half-open interval slightly bigger, so that your chosen closed interval is a subset. Then, keep generating the random variable until it lands in said closed interval.

Example: If you want something uniform in [3,8], then repeatedly regenerate a uniform random variable in [3,9) until it happens to land in [3,8].

function randomInRangeInclusive(min, max) {
  var ret;
  do {
    ret = min + (Math.random() * (max - min) * 1.1);
  } while (ret > max);
  return ret;
}

Note: The number of times you generate the half-open random variable is random and potentially unbounded, but you can make the expected number of calls as close to 1 as you like (with the 1.1 factor above, each draw is accepted with probability 1/1.1, so the expected number of calls is 1.1), and I don't think there exists a solution that doesn't potentially call infinitely many times.

Berry
  • 151
  • 3
  • It works mathematically, but in the floating-point world there is no guarantee that this ever returns `max`. – tc. Mar 16 '12 at 01:58
1

Generating a "uniform" floating-point number in a range is non-trivial. For example, the common practice of multiplying or dividing a random integer by a constant, or by scaling a "uniform" floating-point number to the desired range, have the disadvantage that not all numbers a floating-point format can represent in the range can be covered this way, and may have subtle bias problems. These problems are discussed in detail in "Generating Random Floating-Point Numbers by Dividing Integers: a Case Study" by F. Goualard.

Just to show how non-trivial the problem is, the following pseudocode generates a random "uniform-behaving" floating-point number in the closed interval [lo, hi], where the number is of the form FPSign * FPSignificand * FPRADIX^FPExponent. The pseudocode below was reproduced from my section on floating-point number generation. Note that it works for any precision and any base (including binary and decimal) of floating-point numbers.

METHOD RNDRANGE(lo, hi)
  losgn = FPSign(lo)
  hisgn = FPSign(hi)
  loexp = FPExponent(lo)
  hiexp = FPExponent(hi)
  losig = FPSignificand(lo)
  hisig = FPSignificand(hi)
  if lo > hi: return error
  if losgn == 1 and hisgn == -1: return error
  if losgn == -1 and hisgn == 1
    // Straddles negative and positive ranges
    // NOTE: Changes negative zero to positive
    mabs = max(abs(lo),abs(hi))
    while true
       ret=RNDRANGE(0, mabs)
       neg=RNDINT(1)
       if neg==0: ret=-ret
       if ret>=lo and ret<=hi: return ret
    end
  end
  if lo == hi: return lo
  if losgn == -1
    // Negative range
    return -RNDRANGE(abs(lo), abs(hi))
  end
  // Positive range
  expdiff=hiexp-loexp
  if loexp==hiexp
    // Exponents are the same
    // NOTE: Automatically handles
    // subnormals
    s=RNDINTRANGE(losig, hisig)
    return s*1.0*pow(FPRADIX, loexp)
  end
  while true
    ex=hiexp
    while ex>MINEXP
      v=RNDINTEXC(FPRADIX)
      if v==0: ex=ex-1
      else: break
    end
    s=0
    if ex==MINEXP
      // Has FPPRECISION or fewer digits
      // and so can be normal or subnormal
      s=RNDINTEXC(pow(FPRADIX,FPPRECISION))
    else if FPRADIX != 2
      // Has FPPRECISION digits
      s=RNDINTEXCRANGE(
        pow(FPRADIX,FPPRECISION-1),
        pow(FPRADIX,FPPRECISION))
    else
      // Has FPPRECISION digits (bits), the highest
      // of which is always 1 because it's the
      // only nonzero bit
      sm=pow(FPRADIX,FPPRECISION-1)
      s=RNDINTEXC(sm)+sm
    end
    ret=s*1.0*pow(FPRADIX, ex)
    if ret>=lo and ret<=hi: return ret
  end
END METHOD
Peter O.
  • 32,158
  • 14
  • 82
  • 96
1

Given the "extremely large" number of values between 0 and 1, does it really matter? The chances of actually hitting 1 are tiny, so it's very unlikely to make a significant difference to anything you're doing.

Jon Skeet
  • 1,421,763
  • 867
  • 9,128
  • 9,194
0
private static double random(double min, double max) {
    // Math.random() is in [0, 1); folding r >= 0.5 to 1.5 - r maps those values
    // into (0.5, 1.0], so the combined result covers the closed interval [0, 1].
    final double r = Math.random();
    return (r >= 0.5d ? 1.5d - r : r) * (max - min) + min;
}
ivan.a.bovin
  • 1,133
  • 10
  • 17
0

I am fairly inexperienced, so I am also looking for solutions myself.

This is my rough thought:

Random number generators produce numbers in [0,1) instead of [0,1] because [0,1) is a unit-length interval that can be followed by [1,2), [2,3), and so on without overlapping.

For a random number in [x, y], you can do this:

function randomInclusive(x, y) {
    var MIN = Number.MIN_VALUE; // smallest value above zero
    var result;
    do {
        // randomInRange is the [min, max) helper from the question
        result = randomInRange(x, y + MIN);
    } while (result > y);
    return result;
}

Where all values in [x, y] have the same probability of being picked, and you can reach y now.

Peter O.
  • 32,158
  • 14
  • 82
  • 96
0

Math.round() will help to include the bound value. If you have 0 <= value < 1 (1 is exclusive), then Math.round(value * 100) / 100 returns 0 <= value <= 1 (1 is inclusive). A note here is that the value now has only 2 digits after the decimal point. If you want 3 digits, try Math.round(value * 1000) / 1000 and so on. The following function has one more parameter, the number of digits after the decimal point, which I call precision:

function randomInRange(min, max, precision) {
    return Math.round(Math.random() * Math.pow(10, precision)) /
            Math.pow(10, precision) * (max - min) + min;
}
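
For example, randomInRange(0, 1, 2) can only return one of the 101 values 0, 0.01, 0.02, ..., 0.99, 1, so both endpoints are now reachable (at the cost of coarser granularity).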
Thach Van
  • 1,381
  • 16
  • 19
0

How about this?

function randomInRange(min, max){
    var n = Math.random() * (max - min + 0.1) + min;
    return n > max ? randomInRange(min, max) : n;
}

If you get stack overflow on this I'll buy you a present.

-- EDIT: never mind about the present. I got wild with:

randomInRange(0, 0.0000000000000000001)

and got stack overflow.

Aaron Plocharczyk
  • 2,776
  • 2
  • 7
  • 15
0

The question is akin to asking: what is the floating point number right before 1.0? There is such a floating point number, but it differs from 1.0 by only one part in 2^24 (for an IEEE float) or one part in 2^53 (for a double).
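
For the double case, the value in question can be checked in a JavaScript console (a quick illustration):

console.log(1 - Math.pow(2, -53));        // 0.9999999999999999, the largest double below 1
console.log(1 - Math.pow(2, -53) === 1);  // false
console.log(1 - Math.pow(2, -54) === 1);  // true: anything closer rounds back up to 1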

The difference is negligible in practice.

wallyk
  • 56,922
  • 16
  • 83
  • 148
  • @OliCharlesworth: Which one is *sweeping*? – wallyk Mar 15 '12 at 17:35
  • "The difference is negligible in practice". One could probably concoct a scenario where the difference was important. – Oliver Charlesworth Mar 15 '12 at 17:37
  • @OliCharlesworth Actually, you probably couldn't. If you're sensitive to the difference between `1` and the next-smallest floating-point number, then your results will be overrun by issues with the precision of floating-point numbers, and won't represent anything meaningful until you use something with higher precision (at which point you're no longer sensitive to the difference between `1` and the next-smallest float). – Aaron Dufour Mar 15 '12 at 18:49
  • Actually it's more like 1/2**53 ;) – tc. Mar 16 '12 at 01:59
  • @tc: Yep, you are quite correct. I wrote 2^56 faster than I thought about it, and forgot that the exponent is 11 bits, not 8 as for an IEEE-754 shortreal. I've fixed it, though it makes no difference to my point. – wallyk Mar 16 '12 at 02:49
0

What would be a situation where you would NEED a floating point value to be inclusive of the upper bound? For integers I understand, but for a float, the difference between inclusive and exclusive is something like 1.0e-32.

0

Think of it this way. If you imagine that floating-point numbers have arbitrary precision, the chances of getting exactly min are zero. So are the chances of getting max. I'll let you draw your own conclusion on that.

This 'problem' is equivalent to getting a random point on the real line between 0 and 1. There is no 'inclusive' and 'exclusive'.

Kendall Frey
  • 43,130
  • 20
  • 110
  • 148
  • And the chances of getting any specific value are zero, too, so summed up you won't get any result? – Bergi Mar 15 '12 at 17:11
  • Well, there are an (uncountably) infinite number of real numbers, so `0 * infinity = undefined`. In actuality, the result is 1. This is because the 0 in the above equation is determined by `1 (sum of probabilities) / infinity (number of possibilities) = 0 (probability of any one number)`. Using that logic, you simplify `0 * inf = 1 / inf * inf = 1`. – Kendall Frey Mar 15 '12 at 17:16
  • The problem with this argument is that floating-point numbers *don't* have arbitrary precision! In any real-world experiment, you could demonstrate the difference statistically. – Oliver Charlesworth Mar 15 '12 at 17:29
  • In any real-world experiment, it would take you too many CPU-years to come up with a significant sample of random numbers. – Kendall Frey Mar 15 '12 at 17:32
  • @KendallFrey: There are only 2^32 different single-precision floating-point numbers (a reasonable proportion of which aren't in the range [0,1)); 2 billion iterations isn't **that** long. (But for double-precision, you're probably safe.) – Oliver Charlesworth Mar 15 '12 at 17:38
  • I do agree with you on the single-precision. I assumed double-precision since the question mentioned 'an IEEE-754 floating point double precision'. – Kendall Frey Mar 15 '12 at 17:40