
e.g. if the given number is 10, we have to return 7 (as it is the nearest smaller prime number)

The way I could think of is this:
Mainloop: test whether the given number is prime (by applying a primality test);
if it is prime, return the number, else decrement the number by 1 and go to Mainloop.

But I have to work in the long long int range, and it's taking a lot of time.

Is there a better approach? Also, if I should go with the above way, which primality test should I use?

greybeard
alpha_century
  • You can shave off half of your tests by only checking odd numbers. – Joseph Mansfield May 06 '13 at 17:35
  • Try the [Sieve of Eratosthenes](http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes). – Morwenn May 06 '13 at 17:35
  • You need a faster prime test then. If it's so slow, you're probably using trial division? Use for example the Baillie-Pomerance-Selfridge-Wagstaff test. – Daniel Fischer May 06 '13 at 17:36
  • possible duplicate of [Algorithm to find largest prime number smaller than x](http://stackoverflow.com/questions/6741947/algorithm-to-find-largest-prime-number-smaller-than-x); see also [Find a largest prime number less than n](http://stackoverflow.com/q/16375681) and [Prime number just below a number](http://stackoverflow.com/q/16387319). – James Waldby - jwpat7 May 06 '13 at 18:30
  • (7 is not *the nearest smaller prime number* from 7.) – greybeard Sep 10 '22 at 11:58

6 Answers


If the size of your inputs is bounded, lookup in a table of precomputed primes will probably be the fastest.

fortran
  • "But I have to work on long long int range" <- that would make a huuuge lookup table. There are - give or take - something like 2.4*10^16 primes below 10^18. – Daniel Fischer May 06 '13 at 17:45
  • +1 lookup table should be part of the solution almost no matter what the other details are. – Grady Player May 06 '13 at 17:52
  • But now that I think twice: as the distribution of the prime numbers becomes more and more sparse, if all the inputs are large, the lower part of the table (which is more densely populated) is not really needed, so it could be trimmed to a more manageable size. – fortran May 07 '13 at 09:20

Here is a pseudocode implementation of the Baillie-Wagstaff Pseudoprimality Test that Daniel Fischer mentioned in his comment. We begin with a simple Sieve of Eratosthenes that we will need later.

function primes(n)
    ps := []
    sieve := makeArray(2..n, True)
    for p from 2 to n
        if sieve[p]
            ps.append(p)
            for i from p * p to n step p
                sieve[i] := False
    return ps

The powerMod function raises the base b to the exponent e with all calculations done modulo m; it is much faster than performing the exponentiation first, then taking the modulus of the result, because the intermediate calculation will be huge.

function powerMod(b, e, m)
    x := 1
    while e > 0
        if e % 2 == 1
            x := (b * x) % m
        b := (b * b) % m
        e := floor(e / 2)
    return x

The jacobi function from number theory computes the Jacobi symbol (a/p); when p is an odd prime, it tells whether a is a quadratic residue mod p.

function jacobi(a, p)
    a := a % p
    t := 1
    while a != 0
        while a % 2 == 0
            a := a / 2
            if p % 8 == 3 or p % 8 == 5
                t := -t
        a, p := p, a # swap
        if a % 4 == 3 and p % 4 == 3
            t := -t
        a := a % p
    if p == 1 return t else return 0

Gary Miller's strong pseudoprime test is based on Fermat's little theorem, which states that if p is a prime number, then for any a not divisible by p, a ^ (p - 1) == 1 (mod p). Miller's test is somewhat stronger than Fermat's because it can't be fooled by Carmichael numbers.

function isStrongPseudoprime(n, a)
    d := n - 1; s := 0
    while d % 2 == 0
        d := d / 2; s := s + 1
    t := powerMod(a, d, n)
    if t == 1 return ProbablyPrime
    while s > 0
        if t == n - 1 return ProbablyPrime
        t := (t * t) % n; s := s - 1
    return Composite

The Miller-Rabin test performs k strong-pseudoprime tests, where k is typically somewhere between 10 and 25. The strong pseudoprime test can be fooled, but if you perform enough of them, the likelihood of being fooled is very small.

function isPrime(n) # Miller-Rabin
    for i from 1 to k
        a := randInt(2 .. n-1)
        if not isStrongPseudoprime(n, a)
            return Composite
    return ProbablyPrime

That primality test is sufficient for most purposes, and fast enough. But if you want something a little bit stronger and a little bit faster, a test based on Lucas chains can be used. Here is the calculation of the Lucas chain.

function chain(n, u, v, u2, v2, d, q, m)
    k := q
    while m > 0
        u2 := (u2 * v2) % n; v2 := (v2 * v2 - 2 * q) % n
        q := (q * q) % n
        if m % 2 == 1
            t1 := u2 * v; t2 := u * v2
            t3 := v2 * v; t4 := u2 * u * d
            u, v := t1 + t2, t3 + t4
            if u % 2 == 1 u := u + n
            if v % 2 == 1 v := v + n
            u, v, k := (u / 2) % n, (v / 2) % n, (q * k) % n
        m := floor(m / 2)
    return u, v, k

It is common to initialize the Lucas chain using an algorithm due to John Selfridge.

function selfridge(n)
    d, s := 5, 1; ds := d * s
    repeat
        if gcd(ds, n) > 1 return ds, 0, 0
        if jacobi(ds, n) == 1 return ds, 1, (1 - ds) / 4
        d, s := d + 2, s * -1; ds := d * s

Then a Lucas pseudoprime test determines if a number is prime or probably composite. Like the Fermat test, it comes in two flavors, both standard and strong, and like the Fermat test it can be fooled, though with the Fermat test the error is that a composite number could be improperly reported prime but with the Lucas test the error is that a prime number could be improperly reported composite.

function isLucasPseudoprime(n) # standard
    d, p, q := selfridge(n)
    if p == 0 return n == d
    u, v, k := chain(n, 0, 2, 1, p, d, q, (n + 1) / 2)
    return u == 0

function isLucasPseudoprime(n) # strong
    d, p, q := selfridge(n)
    if p == 0 return n == d
    s, t := 0, n + 1
    while t % 2 == 0
        s, t := s + 1, t / 2
    u, v, k := chain(n, 1, p, 1, p, d, q, t // 2)
    if u == 0 or v == 0 return Prime
    r := 1
    while r < s
        v := (v * v - 2 * k) % n; k := (k * k) % n
        if v == 0 return Prime
        r := r + 1
    return ProbablyComposite

Then the Baillie-Wagstaff test is simple. First check if the input is less than 2 or is a perfect square (check if the square root is an integer). Then trial division by the primes less than 100 finds most composites quickly, and finally a strong pseudoprime test to base 2 (some people add a strong pseudoprime test to base 3, to be extra sure) followed by a Lucas pseudoprime test makes the final determination.

function isPrime(n) # Baillie-Wagstaff
    if n < 2 or isSquare(n) return False
    for p in primes(100)
        if n % p == 0 return n == p
    return isStrongPseudoprime(n, 2) \
       and isLucasPseudoprime(n) # standard or strong

The Baillie-Wagstaff test has no known counterexamples.

Once you have a good primality test, you can find the largest prime less than n by counting down from n, stopping at the first prime number.

If you are interested in programming with prime numbers, I modestly recommend this essay at my blog, or many of the other blog entries related to prime numbers, which you can find by using the search function at the blog.

user448810

In addition to the above, note that Bertrand's postulate states that there always exists at least one prime number p with n < p < 2n - 2 (for n > 3). So that gives you a bound on how far you have to search.

frickskit
  • 3
    If you would combine this with the prime sieves method, the highest prime you should find should be lower than 1.5*n-1 (since if n would be the upper bound than 0.5*n+1 should be the lower bound, which has a distance of 0.5*n-1 from n). So you should repeat the prime sieves method until floor(sqrt(1.5*n-1)). – fibonatic May 06 '13 at 18:16

Look into the Miller-Rabin primality test. It is probabilistic, but if you run it several hundred times it can almost guarantee correctness within the long long range.

Also, if you can use Java, BigInteger.isProbablePrime can help. C/C++ does not seem to have a built-in function for testing primality.

zw324

Very interesting task you have!

I decided to implement for you, from scratch, a quite advanced, large, but fast (efficient) solution in pure C++ (it needs the C++20 standard to compile).

The following supplementary algorithms are used in my code: Sieve of Eratosthenes, Fermat primality test, Miller-Rabin primality test, trial division (by primes), and binary search.

The external C++ library Boost Multiprecision is used for big-integer arithmetic. You may use other libraries for that if you want; my code is written so that any library can be plugged in. Also, the Clang/GCC compilers have an __int128 type, which you can use instead of big integers if your task stays within the 128-bit range.

My code supports integers of any bit size, but I thoroughly tested correctness and speed up to 1024 bits (i.e. I tested 8, 16, 32, 64, 128, 256, 512, and 1024 bits).

Wherever possible, computations are done on multiple cores: if you have 8 cores (more precisely, hardware threads), I start a pool of 8 threads and use them in all computations. This multi-threaded approach is only used for integers bigger than 128 bits; that is a heuristic enabled by default, but if needed you can force the single-threaded or multi-threaded algorithm for any bit size by setting an extra parameter of the main function, PrevPrime().

My main algorithm (the PrevPrime() function) performs the following steps:

  1. Once per program run, I compute a table of the primes smaller than 2^20 using the Sieve of Eratosthenes. This table is enough for integers up to 1024 bits; if you use 2048 bits or more, more primes can be used.

  2. Given N, I go downwards from N, again using a Sieve of Eratosthenes to filter composite numbers out of the range (N - length, N]. Not all 2^20 primes are used; I determined experimentally how many primes are optimal for each input bit size (for 32-bit, 100 primes are enough; for 64-bit, 500 primes; for 128-bit, 1200 primes; for 256-bit, 7000 primes; for 512-bit, 17500 primes; for 1024-bit, 75000 primes).

  3. For each specific prime p, the Sieve of Eratosthenes needs to start sieving from the offset N - off, where off = N % p. Because division is the most expensive of the elementary CPU operations, I compute these offsets with the multi-threaded (multi-core) approach for bigger integers (for smaller ones a single thread is used).

  4. After sieving is done, we know for sure which numbers in the range (N - length, N] are composite, but we are unsure whether the rest are prime or composite. Hence the last step is to run the Miller-Rabin primality test on each remaining number. These tests are also multi-threaded (for bigger integers only).

Timings of PrevPrime() on my old and slow 1.2 GHz CPU, which has 8 hardware threads, for different integer bit sizes:

32-bit - less than 0.00005 sec per PrevPrime() call; 64-bit - 0.0005 sec; 128-bit - 0.002 sec; 256-bit - 0.012 sec; 512-bit - 0.07-0.1 sec; 1024-bit - 0.55-1.3 sec.

So you can see that even on my slow 1.2 GHz CPU, finding the previous prime for a 1024-bit integer takes just about a second.

The timings above depend not only on the bit size of the integers but also on the density of prime numbers. It is well known that among numbers of bit size B, primes appear on average once every ln(2^B) numbers. Hence bigger numbers not only involve more difficult big-integer mathematics, their primes are also rarer, so more candidates have to be checked before the previous prime is found.

To show how much the sieving in PrevPrime() helps, here are timings for the Fermat and Miller-Rabin tests themselves (F below is Fermat, MR is Miller-Rabin):

128-bit - composite MR & F - 0.0002 sec, prime MR - 0.0026 sec, F - 0.0054 sec; 256-bit - composite MR & F - 0.0007 sec, prime MR - 0.0116 sec, F - 0.0234 sec; 512-bit - composite MR & F - 0.0043 sec, prime MR - 0.0773 sec, F - 0.1558 sec; 1024-bit - composite MR & F - 0.0290 sec, prime MR - 0.4518 sec, F - 0.9085 sec.

From the statistics above we can conclude that the Fermat test takes twice as much time as Miller-Rabin when targeting the same probability of failure.

As you can see, a single Miller-Rabin test of a 1024-bit prime takes about 0.5 sec, and the whole PrevPrime() runs within 1 second; PrevPrime() is thus so efficient that it is only about twice as slow as a single Miller-Rabin test of one prime number.

Of course, as you may have noticed, I used probabilistic primality tests, because a deterministic test of a 1024-bit number would take vastly more time.

For the probabilistic tests I chose a target failure probability of 1/2^32 = 1/4294967296, so it is very, very rare that a primality test will fail. You may choose a smaller probability; it is tweakable in my program if you need more confidence.

In theory, each single Fermat test gives at most a 1/2 chance of failure (excluding Carmichael numbers, on which it fails almost always), while each Miller-Rabin round gives at most a 1/4 chance. Hence, to reach the target probability of 1/2^32, one needs 32 rounds of Fermat or 16 rounds of Miller-Rabin.

The threshold for the number of primes needed at each bit size is automatically tunable, but that needs a lot of one-time pre-computation; that's why I computed the table once and hard-coded it (the code that calculates the table is still there). If you need to re-compute the thresholds, go to the function ComputePrevPrimeOptimalPrimesCnt(), comment out its first line (the one with return bits <= 32 ? 128 : .....), and run the program; it will output the sizes optimal for your PC and bit sizes. But my pre-calculated table should be good for any CPU; only new bit sizes need to be tuned.

The only library I depend on is Boost. Under Linux it can easily be installed with sudo apt install libboost-all-dev. Under Windows it is a bit more difficult, but it can still be installed through Chocolatey's Boost package (do choco install boost-msvc-14.3, but install Chocolatey first).

By default, when the program is run, it runs all tests; tests are all the functions in the code that start with the macro TEST(TestName) { ....... If you need extra tests, just write a body like TEST(NewTest) { ...code here... }; no extra registration of the test is needed. If you wish, you can also remove all TEST() functions when using my library; they are not needed for it to function, but they are useful for testing it quite thoroughly.

Important notice! I wrote this code within a single day, so it is not very well tested and may contain bugs. Hence using it straight away in production is not suggested without looking through the code with extra attention and/or extra testing. It is more of an educational code base that can serve as guidance for writing your own production-ready code.

Code goes below.

Try it online!

(Note: the Try it online! run is limited to 128 bits (out of the 1024 possible) because the GodBolt server limits total program run time to 3 seconds; change test_till_bits = 128 to 1024 after downloading.)

SOURCE CODE IS HERE: Source Code hosted on GitHub Gist. Unfortunately, due to StackOverflow's post limit of 30K symbols, I can't inline the full source code here: the source code alone is 30K bytes, not counting the long English description above (an extra 10K). Hence I'm sharing the source code on a GitHub Gist (link above) and on the GodBolt server (the Try it online! link above); both links have the full source code.

Code console output:

'Test GenPrimes' 0.006 sec
'Test GenPrimes_CntPrimes' 0.047 sec
'Test MillerRabin' 0.006 sec
Fermat_vs_MillerRabin: bits    8 prp_cnt  58 mr_time 0.0000|0.0000 f_time 0.0000|0.0000, total time 0.009 sec
Fermat_vs_MillerRabin: bits   16 prp_cnt  57 mr_time 0.0000|0.0000 f_time 0.0000|0.0000, total time 0.001 sec
Fermat_vs_MillerRabin: bits   32 prp_cnt  46 mr_time 0.0000|0.0000 f_time 0.0000|0.0000, total time 0.004 sec
Fermat_vs_MillerRabin: bits   64 prp_cnt  22 mr_time 0.0000|0.0005 f_time 0.0000|0.0011, total time 0.041 sec
Fermat_vs_MillerRabin: bits  128 prp_cnt  13 mr_time 0.0002|0.0028 f_time 0.0002|0.0056, total time 0.134 sec
Fermat_vs_MillerRabin: bits  256 prp_cnt   6 mr_time 0.0008|0.0120 f_time 0.0008|0.0245, total time 0.321 sec
Fermat_vs_MillerRabin: bits  512 prp_cnt   1 mr_time 0.0042|0.0618 f_time 0.0042|0.1242, total time 0.857 sec
Fermat_vs_MillerRabin: bits 1024 prp_cnt   1 mr_time 0.0273|0.4101 f_time 0.0273|0.8232, total time 3.807 sec
'Test IsProbablyPrime_Fermat_vs_MillerRabin' 5.176 sec
'Test PrevPrime_ReCheckWith_PrimesDiv_and_Fermat' 5.425 sec
PrevPrime    8-bit threads:1  0.0000 sec, avg distance    3.3, total time 0.001 sec
PrevPrime    8-bit threads:8  0.0000 sec, avg distance    3.3, total time 0.000 sec
PrevPrime   16-bit threads:1  0.0000 sec, avg distance   11.2, total time 0.000 sec
PrevPrime   16-bit threads:8  0.0000 sec, avg distance   11.2, total time 0.000 sec
PrevPrime   32-bit threads:1  0.0001 sec, avg distance   10.5, total time 0.001 sec
PrevPrime   32-bit threads:8  0.0001 sec, avg distance   10.5, total time 0.001 sec
PrevPrime   64-bit threads:1  0.0014 sec, avg distance   39.7, total time 0.025 sec
PrevPrime   64-bit threads:8  0.0012 sec, avg distance   39.7, total time 0.023 sec
PrevPrime  128-bit threads:1  0.0110 sec, avg distance   85.2, total time 0.170 sec
PrevPrime  128-bit threads:8  0.0084 sec, avg distance   85.2, total time 0.142 sec
PrevPrime  256-bit threads:1  0.0452 sec, avg distance  207.4, total time 0.570 sec
PrevPrime  256-bit threads:8  0.0331 sec, avg distance  207.4, total time 0.473 sec
PrevPrime  512-bit threads:1  0.1748 sec, avg distance  154.0, total time 2.246 sec
PrevPrime  512-bit threads:8  0.1429 sec, avg distance  154.0, total time 2.027 sec
PrevPrime 1024-bit threads:1  2.1249 sec, avg distance  379.4, total time 15.401 sec
PrevPrime 1024-bit threads:8  1.6030 sec, avg distance  379.4, total time 12.778 sec
'Test PrevPrime' 33.862 sec
'Program run time' 44.524 sec
Arty

It looks like you are working on this problem.

As @Ziyao Wei said, you could simply use the Miller-Rabin primality test to solve it.

Here is my solution:

#include<cstdio>
#include<cstdlib>
#include<ctime>

short T;
unsigned long long n;

// (a * b) % n computed by shift-and-add, so the product never overflows
// 64 bits as long as n fits in 63 bits.
inline unsigned long long multi_mod(const unsigned long long &a,unsigned long long b,const unsigned long long &n)
{
    unsigned long long exp(a%n),tmp(0);
    while(b)
    {
        if(b&1)
        {
            tmp+=exp;
            if(tmp>=n)
                tmp-=n;
        }
        exp<<=1;
        if(exp>=n)
            exp-=n;
        b>>=1;
    }
    return tmp;
}

// (a ^ b) % c by repeated squaring, using multi_mod for the products.
inline unsigned long long exp_mod(unsigned long long a,unsigned long long b,const unsigned long long &c)
{
    unsigned long long tmp(1);
    while(b)
    {
        if(b&1)
            tmp=multi_mod(tmp,a,c);
        a=multi_mod(a,a,c);
        b>>=1;
    }
    return tmp;
}

// Miller-Rabin test with T random bases.
inline bool miller_rabin(const unsigned long long &n,short T)
{
    if(n==2)
        return true;
    if(n<2 || !(n&1))
        return false;
    unsigned long long a,u(n-1),x,y(0);
    short t(0),i;
    while(!(u&1))   // write n - 1 = u * 2^t with u odd
    {
        ++t;
        u>>=1;
    }
    while(T--)
    {
        a=rand()%(n-1)+1;   // random base in [1, n-1]
        x=exp_mod(a,u,n);
        for(i=0;i<t;++i)
        {
            y=multi_mod(x,x,n);
            if(y==1 && x!=1 && x!=n-1)   // nontrivial square root of 1 => composite
                return false;
            x=y;
        }
        if(y!=1)   // a^(n-1) != 1 (mod n) => composite by Fermat's little theorem
            return false;
    }
    return true;
}

int main()
{
    srand(time(NULL));
    scanf("%hd",&T);   // number of queries
    while(T--)
    {
        // count down from n (inclusive) until a probable prime is found
        for(scanf("%llu",&n);!miller_rabin(n,20);--n);
        printf("%llu\n",n);
    }
    return 0;
}