The standard use case for BigInteger.isProbablePrime(int) is in cryptography. Specifically, certain cryptographic algorithms, such as RSA, require randomly chosen large primes. Importantly, however, these algorithms don't really require these numbers to be guaranteed prime; they just need to be prime with a very high probability.
How high is very high? Well, in a crypto application, one would typically call .isProbablePrime() with an argument somewhere between 128 and 256. Thus, the probability of a non-prime number passing such a test is less than one in 2^128 or 2^256, respectively.
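In Java, a typical usage might look something like the following minimal sketch (the 2048-bit size and the certainty value of 128 are just illustrative choices here, not recommendations for any particular protocol):

```java
import java.math.BigInteger;
import java.security.SecureRandom;

public class ProbablePrimeDemo {
    public static void main(String[] args) {
        SecureRandom rng = new SecureRandom();

        // Generate a random 2048-bit number that is very likely prime;
        // probablePrime() already runs a probabilistic primality test internally.
        BigInteger p = BigInteger.probablePrime(2048, rng);

        // Re-check it explicitly: with a certainty of 128, the probability that
        // a composite number passes this call is at most 2^-128.
        System.out.println(p.isProbablePrime(128)); // prints "true"
    }
}
```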
Let's put that in perspective: if you had 10 billion computers, each generating 10 billion probable prime numbers per second (which would mean less than one clock cycle per number on any modern CPU), and the primality of those numbers was tested with .isProbablePrime(128), you would, on average, expect a non-prime number to slip through about once every 100 billion years.
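(As a quick sanity check on that figure: 10^10 computers times 10^10 candidates per second is 10^20 tests per second, while 2^128 ≈ 3.4 × 10^38. So the expected time until the first false positive is roughly 3.4 × 10^38 / 10^20 ≈ 3.4 × 10^18 seconds, which is on the order of 10^11, i.e. about 100 billion, years.)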
That is, that would be the case if those 10 billion computers could somehow all run for hundreds of billions of years without experiencing any hardware failures. In practice, though, it's far more likely for a random cosmic ray to strike your computer at just the right time and place to flip the return value of .isProbablePrime(128) from false to true, without causing any other detectable effects, than it is for a non-prime number to actually pass the probabilistic primality test at that certainty level.
Of course, the same risk of random cosmic rays and other hardware faults also applies to deterministic primality tests like AKS. Thus, in practice, even these tests have a (very small) baseline false positive rate due to random hardware failures (not to mention all other possible sources of errors, such as implementation bugs).
The intrinsic false positive rate of the Miller–Rabin primality test used by .isProbablePrime() is easy to push far below this baseline rate simply by repeating the test sufficiently many times, and even repeated that many times, Miller–Rabin is still much faster in practice than the best known deterministic primality tests like AKS. That's why it remains the standard primality test for cryptographic applications.
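For illustration, here's a rough sketch of the Miller–Rabin test in Java (not the JDK's actual implementation). Each round uses an independent random base, and the standard worst-case bound says a composite number survives any single round with probability at most 1/4, so k rounds give an error bound of at most 4^-k; that's why piling on extra rounds is so cheap and effective:

```java
import java.math.BigInteger;
import java.security.SecureRandom;

public class MillerRabinSketch {
    private static final SecureRandom RNG = new SecureRandom();
    private static final BigInteger TWO = BigInteger.valueOf(2);
    private static final BigInteger THREE = BigInteger.valueOf(3);

    // Returns true if n is probably prime; a composite n passes all `rounds`
    // rounds with probability at most 4^-rounds (usually far less).
    static boolean isProbablyPrime(BigInteger n, int rounds) {
        if (n.compareTo(TWO) < 0) return false;           // 0, 1, negatives
        if (n.equals(TWO) || n.equals(THREE)) return true;
        if (!n.testBit(0)) return false;                   // even and > 2

        // Write n - 1 = d * 2^s with d odd.
        BigInteger nMinusOne = n.subtract(BigInteger.ONE);
        int s = nMinusOne.getLowestSetBit();
        BigInteger d = nMinusOne.shiftRight(s);

        nextRound:
        for (int i = 0; i < rounds; i++) {
            // Pick a random base a in [2, n - 2].
            BigInteger a;
            do {
                a = new BigInteger(n.bitLength(), RNG);
            } while (a.compareTo(TWO) < 0 || a.compareTo(nMinusOne) >= 0);

            BigInteger x = a.modPow(d, n);
            if (x.equals(BigInteger.ONE) || x.equals(nMinusOne)) continue;

            for (int r = 1; r < s; r++) {
                x = x.multiply(x).mod(n);
                if (x.equals(nMinusOne)) continue nextRound;
            }
            return false; // a is a witness: n is definitely composite
        }
        return true; // no witness found: n is probably prime
    }

    public static void main(String[] args) {
        BigInteger candidate = BigInteger.probablePrime(512, RNG);
        System.out.println(isProbablyPrime(candidate, 64)); // almost certainly true
    }
}
```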
(Besides, even if you happened to accidentally select a strong pseudoprime as one of the factors of your RSA modulus, it would not generally lead to a catastrophic failure. Typically, such pseudoprimes would be products of two (or rarely more) primes of approximately half the length, which means that you'd end up with a multi-prime RSA key. As long as none of the factors were too small (and if they were, the primality test should've caught them), the RSA algorithm will still work just fine, and the key, although somewhat weaker against certain types of attacks than normal RSA keys of the same length, should still be reasonably secure if you didn't needlessly skimp on the key length.)
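To illustrate that last point with a deliberately tiny toy example (the primes, exponent, and message below are made up for demonstration and have nothing to do with real key sizes): suppose one of your two "primes" was actually a product of two smaller primes, so the modulus ends up with three prime factors instead of two. Textbook RSA still round-trips:

```java
import java.math.BigInteger;

public class MultiPrimeRsaToy {
    public static void main(String[] args) {
        BigInteger one = BigInteger.ONE;

        // A genuine prime...
        BigInteger p = BigInteger.valueOf(101);
        // ...and a "prime" that is actually composite: 103 * 107 = 11021.
        BigInteger q = BigInteger.valueOf(103);
        BigInteger r = BigInteger.valueOf(107);

        // The modulus therefore has three prime factors instead of two.
        BigInteger n = p.multiply(q).multiply(r);

        // phi(n) = (p-1)(q-1)(r-1) for a squarefree modulus.
        BigInteger phi = p.subtract(one)
                          .multiply(q.subtract(one))
                          .multiply(r.subtract(one));

        BigInteger e = BigInteger.valueOf(65537);
        BigInteger d = e.modInverse(phi);

        BigInteger message = BigInteger.valueOf(424242);
        BigInteger ciphertext = message.modPow(e, n);
        BigInteger recovered = ciphertext.modPow(d, n);

        // Encryption and decryption still invert each other.
        System.out.println(recovered.equals(message)); // prints "true"
    }
}
```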