
Using coin flips, is it computationally faster to generate random integers from 1 to 9 with a single-factor algorithm, or with a multi-factor algorithm?

With a single factor, each attempt uses four coin flips (four instances of f(2)), producing a four-bit result with a range of 2^4 = 16. To generate an integer from 1 to 9 this way, 9 of the 16 possible results are valid and the other 7 force a retry. So this single-factor method has a 7/16 ≈ 44% fail/retry rate per four-flip attempt.
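For concreteness, here is a rough JavaScript sketch of what I mean by the single-factor method (the function names are just illustrative):

function coinFlip() {
  // Stand-in for a fair coin: 0 or 1 with equal probability.
  return Math.random() < 0.5 ? 0 : 1;
}

function randomOneToNineSingleFactor() {
  // Single-factor method: draw four flips at once (value is 0..15) and
  // retry whenever the result falls outside 1..9.
  while (true) {
    var value = 0;
    for (var i = 0; i < 4; ++i) {
      value = 2 * value + coinFlip();
    }
    if (value >= 1 && value <= 9) {
      return value; // 9 of the 16 outcomes are accepted
    }
    // the other 7 outcomes are fail/retry
  }
}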

Using multiple factors instead, can the fail/retry rate be reduced?

First, two instances of f(2) generate a random integer from 1 to 4, accepting only results from 1 to 3, for a 25% fail/retry rate per two-flip attempt.

Next, another two instances of f(2) generate a second random integer from 1 to 4, again accepting only results from 1 to 3, with the same 25% fail/retry rate per two-flip attempt.

The first random integer from 1 to 3 can then be added to 3*((the second random integer from 1 to 3) - 1) to yield a random integer from 1 to 9, with a 25% fail/retry rate for its first two flips, followed by a 25% fail/retry rate for its next two flips.
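Again for concreteness, a rough JavaScript sketch of this two-factor method (function names are just illustrative) might be:

function coinFlip() {
  // Stand-in for a fair coin: 0 or 1 with equal probability.
  return Math.random() < 0.5 ? 0 : 1;
}

function randomOneToThree() {
  // One factor: two flips give 1..4; retry the 1-in-4 case that falls outside 1..3.
  while (true) {
    var value = 2 * coinFlip() + coinFlip() + 1;
    if (value <= 3) {
      return value;
    }
  }
}

function randomOneToNineTwoFactor() {
  // Combine two independent factors: 1..3 plus an offset of {0, 3, 6}
  // covers 1..9 exactly once each.
  var first = randomOneToThree();
  var second = randomOneToThree();
  return first + 3 * (second - 1);
}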

So is the second algorithm's f(2) -> f(9) conversion more efficient, because it uses multiple factors instead of a single factor?

For related discussion, see Creating a random number generator from a coin toss.


1 Answer


The "multifactor" approach definitely requires fewer random bits, on average.

I'll give a full calculation below, but to see why at an intuitive level, consider the following:

  • In the single-factor approach, each set of four bits has a 9/16 chance of giving a valid result; if it fails, you have to start fresh.
  • In the two-factor approach, each set of four bits (= two pairs of two bits) has a 9/16 chance of giving a complete valid result (= two valid factors); but even if it fails, there's a chance that it at least made some progress that can later be reused.

So in a sense, the two-factor approach has two "paths to success":

  • A successful set of four bits.
  • A half-successful set of four bits and, later, another half-successful set.

whereas the single-factor approach has only the first path.


Now, for some math . . .

When a given trial has probability p of success, and all trials are independent, the expected number of trials until one succeeds is 1/p.

Furthermore, if the expected number of trials until one succeeds is N, then the expected number of trials until m have succeeded is mN.

Therefore:

  • when your trial is "get four random bits" and your definition of success is "the result is between 1 and 9", you can expect to need 16/9 trials, meaning 4·16/9 = 64/9 ≈ 7.11 random bits, before you have your result.
  • when your trial is "get two random bits", your definition of success is "the result is between 1 and 3", and you need to succeed twice, then you can expect to need 2·4/3 = 8/3 trials, meaning 2·8/3 = 16/3 ≈ 5.33 random bits, before you have your result.
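(If you want a quick numerical sanity check of the 1/p rule before the full comparison below, a small simulation along these lines, with illustrative names, should come out near 16/9 ≈ 1.78 and 4/3 ≈ 1.33.)

function averageTrialsUntilSuccess(p, runs) {
  // Estimate the expected number of independent trials, each succeeding
  // with probability p, needed before the first success.
  var total = 0;
  for (var i = 0; i < runs; ++i) {
    var trials = 0;
    do {
      ++trials;
    } while (Math.random() >= p);
    total += trials;
  }
  return total / runs;
}

console.log(averageTrialsUntilSuccess(9 / 16, 100000)); // close to 16/9 ≈ 1.78
console.log(averageTrialsUntilSuccess(3 / 4, 100000));  // close to 4/3 ≈ 1.33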

Below is some runnable JavaScript that tries both approaches 10,000 times, and compares them in terms of both average-number-of-bits and number-of-trials-where-each-one-needed-fewer. (Of course, since it's calling Math.random() to get each bit, it's "unrealistic" in that it's asking for a bunch of random bits and then discarding most of them. I include it only as a proof-of-concept of the math, not as actual sample code for implementing the two approaches.)

alert((function () {

var totalNumBitsEverGotten = 0;

function getBit() {
  ++totalNumBitsEverGotten;
  return Math.random() < 0.5 ? 0 : 1;
}

function getNBits(n) {
  var result = 0;
  for (var i = 0; i < n; ++i) {
    result = 2 * result + getBit();
  }
  return result;
}

function getIntInRange(min, max) {
  // Shift the problem so we only ever sample from 0..(max - min), then add min back.
  if (min != 0) {
    return getIntInRange(0, max - min) + min;
  }
  // Count how many bits are needed to represent max.
  var numBitsNeeded = 0;
  for (var tmp = max; tmp > 0; tmp >>= 1) {
    ++numBitsNeeded;
  }
  // Rejection sampling: discard any draw greater than max and try again.
  while (true) {
    var nBits = getNBits(numBitsNeeded);
    if (nBits <= max) {
      return nBits;
    }
  }
}

function countBitsToGet1To9_approach1() {
  var numBitsPreviouslyGotten = totalNumBitsEverGotten;
  getIntInRange(1, 9);
  return totalNumBitsEverGotten - numBitsPreviouslyGotten;
}

function countBitsToGet1To9_approach2() {
  var numBitsPreviouslyGotten = totalNumBitsEverGotten;
  getIntInRange(1, 3);
  getIntInRange(1, 3);
  return totalNumBitsEverGotten - numBitsPreviouslyGotten;
}

var NUM_TRIALS = 10000;

var approach1Sum = 0;
var approach2Sum = 0;

var approach1Wins = 0;
var approach2Wins = 0;

for (var i = 0; i < NUM_TRIALS; ++i) {
  var approach1 = countBitsToGet1To9_approach1();
  var approach2 = countBitsToGet1To9_approach2();

  approach1Sum += approach1;
  approach2Sum += approach2;

  if (approach1 < approach2) {
    ++approach1Wins;
  } else if (approach2 < approach1) {
    ++approach2Wins;
  }
}

return 'After ' + NUM_TRIALS + ' trials:\n' +
       '- Approach #1 average: ' + (approach1Sum / NUM_TRIALS) + ' bits.\n' +
       '- Approach #2 average: ' + (approach2Sum / NUM_TRIALS) + ' bits.\n' +
       '- Approach #1 was faster in ' + approach1Wins + ' trial(s).\n' +
       '- Approach #2 was faster in ' + approach2Wins + ' trial(s).\n' +
       '- The two approaches tied in ' + (NUM_TRIALS - approach1Wins - approach2Wins) + ' trial(s).\n';

})());
ruakh
  • This looks like a combinatorics question to me, and I'd need to study up before forming an opinion, so for now I'll defer to your math comparing an estimated 5.33 random bits to 7.11 bits for the two algorithms. Intuitively, your numbers look like they may be right. But regarding your "but even if it fails, there's a chance that it at least made some progress that can later be reused": I'm not sure about this. Doesn't every output need an equal chance of occurring, for randomness? So on a failure, no bits may be reused, and there must be a total retry? Thanks! – Nikh Beghzr Jul 01 '18 at 00:03
  • @NikhBeghzr: Since the "success" of each two-bit "trial" depends only on that trial, there's no need to impose a restriction like you describe; you can safely use the first two successful trials, regardless of whether they're adjacent. But if you *do* impose a restriction like that, then the two approaches in your question become equivalent -- the only difference is in how they map four-bit sequences to integers -- and they both require the same expected number of bits. – ruakh Jul 01 '18 at 00:08
  • I disagree that they'd be equivalent, because I don't think the multi-factor approach would ever get as far as processing two unsuccessful two-bit trials: it would stop after the first two-bit failure and retry. So sometimes the multi-factor algorithm needs only 2 bits to detect a failure and retry, without computing an extra 2 bits, whereas the single-factor algorithm always needs 4 bits to detect a failure or get a successful return. – Nikh Beghzr Jul 01 '18 at 01:19
  • @NikhBeghzr: If you're OK with discarding the first two bits but using the second two, then how come you're not OK with using the first two bits but discarding the second two? – ruakh Jul 01 '18 at 03:09
  • My comment/response is too long so I link it: https://plus.google.com/113366929351507812798/posts/cMMttwP9g67 – Nikh Beghzr Jul 01 '18 at 17:49