6

Generating prime numbers is simple, but what is the fastest way to find and generate prime numbers recursively?

Here is my solution. However, it is not the best way. I think it is O(N*sqrt(N)). Please correct me if I am wrong.

    public static boolean isPrime(int n) {
        if (n < 2) {
            return false;
        } else if (n % 2 == 0 && n != 2) { // logical &&, not the bitwise &
            return false;
        } else {
            return isPrime(n, (int) Math.sqrt(n));
        }
    }

    // Recursive trial division: tries every i from sqrt(n) down to 2.
    private static boolean isPrime(int n, int i) {
        if (i < 2) {
            return true;
        } else if (n % i == 0) {
            return false;
        } else {
            return isPrime(n, i - 1);
        }
    }

    public static void generatePrimes(int n) {
        if (n < 2) {
            return;
        } else if (isPrime(n)) {
            System.out.println(n);
        }
        generatePrimes(n - 1); // recurse downward toward 2
    }

    public static void main(String[] args) {
        generatePrimes(200);
    }
  • You're testing for primality, not generating primes. – Matthew Flaschen Dec 29 '10 at 05:40
  • 1
    you can use sieve of Eratosthenes if you want to generate prime numbers up to n. – algo-geeks Dec 29 '10 at 11:31
  • not sure if your interviewer wants the best asymptotic time or just the best time, but you can get a local optimization by not sending even numbers into the function at all. Just check whether 200 is even, subtract one, and then each time call generatePrimes(n -= 2). – kralco626 Dec 29 '10 at 12:05
  • but that really doesn't buy you much, since you're checking whether it's even in the isPrime method anyhow... but I'm just saying... you can do some constant-time work and cut your calls to the generatePrimes function in half. – kralco626 Dec 29 '10 at 12:06
  • This seems like a really stupid interview question. – Nick Johnson Sep 29 '11 at 01:13
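kralco626's odd-stepping suggestion, spelled out as a complete sketch (class and method names are my own, and `isPrime` is shown iteratively for self-containment; the question's recursive version works equally well):

```java
public class OddStepPrimes {
    // Standard trial division up to sqrt(n), skipping even divisors.
    static boolean isPrime(int n) {
        if (n < 2) return false;
        if (n % 2 == 0) return n == 2; // 2 is the only even prime
        for (int i = 3; i * i <= n; i += 2)
            if (n % i == 0) return false;
        return true;
    }

    // Walk down through odd numbers only, halving the recursive calls.
    static void generatePrimesOdd(int n) {
        if (n < 3) {
            System.out.println(2); // base case: print the lone even prime
            return;
        }
        if (n % 2 == 0) n--;       // start from the nearest odd number
        if (isPrime(n)) System.out.println(n);
        generatePrimesOdd(n - 2);  // step by 2, odd numbers only
    }

    public static void main(String[] args) {
        generatePrimesOdd(200);
    }
}
```

As the follow-up comment notes, this is only a constant-factor win: the asymptotic cost is still dominated by the trial division inside isPrime.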

5 Answers

4

In mathematics, the sieve of Atkin is a fast, modern algorithm for finding all prime numbers up to a specified integer.

Wikipedia article (contains pseudocode)

To address doing this recursively, perhaps the Sieve of Eratosthenes can be implemented recursively. This page might be helpful, as it appears to discuss a recursive implementation.
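Not taken from the linked page, but one way a recursive Sieve of Eratosthenes might look in Java (class and method names are my own; the recursion replaces the sieve's outer loop):

```java
import java.util.ArrayList;
import java.util.List;

public class RecursiveSieve {
    // Cross off multiples of p if p is still unmarked, then recurse on p + 1.
    static void sieve(boolean[] composite, int p) {
        if (p * p >= composite.length) return;   // base case: p > sqrt(n)
        if (!composite[p]) {
            for (int m = p * p; m < composite.length; m += p)
                composite[m] = true;             // mark multiples of the prime p
        }
        sieve(composite, p + 1);                 // recurse on the next candidate
    }

    static List<Integer> primesUpTo(int n) {
        boolean[] composite = new boolean[n + 1];
        sieve(composite, 2);
        List<Integer> primes = new ArrayList<>();
        for (int i = 2; i <= n; i++)
            if (!composite[i]) primes.add(i);
        return primes;
    }

    public static void main(String[] args) {
        System.out.println(primesUpTo(200));
    }
}
```

The recursion depth is only about sqrt(n), so this stays well within the default stack for ranges like the question's 200.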

Kyle
  • 2,822
  • 2
  • 19
  • 24
  • 1
    but the question asks to find and generate prime numbers recursively –  Dec 29 '10 at 05:46
  • Perhaps my edit addresses the recursive requirement; I'm not sure. – Kyle Dec 29 '10 at 06:14
  • I just edited it again to include a link to a page at CMU's site that might be helpful. – Kyle Dec 29 '10 at 06:23
  • +1 for the interesting CMU link, -1 for suggesting sieve of Atkin (sorry :) ) - its *theoretical* complexity might be better but it's very hard to implement right (the pseudocode in WP article is bogus and it says so on its talk page) and those that tried are saying (here, on SO) that the constant factors involved make it still slower that properly *wheel-erized* Sieve of Eratosthenes, for 32-bit range of numbers for sure. – Will Ness Oct 16 '14 at 20:13
  • @WillNess: Agreed. I'm not aware of _any_ range of numbers on which the Atkin-Bernstein sieve outperforms a suitably-optimized SoE in practice. primegen performs very poorly beyond $2^{32}$, and I don't know of any other serious implementations. (Google finds plenty of toy implementations.) – Charles Feb 16 '15 at 21:21
3

For recursion, you should use memoization to improve your recursive function. That means: whenever you find a prime number, save it in an array; then in a call to isPrime(n), first check whether the number already exists in the array, and only if it doesn't, call isPrime(n, (int) Math.sqrt(n)). Also, if isPrime(n, i) returns true, add n to the prime list. It's best to keep the array sorted so you can do a binary search; in C# there is a sorted list with a binary search operation [building a list of n items takes O(n log n) and searching takes O(log n)]. I don't know the Java equivalent [but you can implement it].

Edit: your current approach is O(n sqrt(n)); with my approach it is the same order but with better performance. In fact the order is O(n sqrt(n) / log(n) + n log(n / log(n))), and because log(n) is smaller than n^ε it's better to say it's O(n sqrt(n)), but as you can see it will run log(n) times faster.

Also, it's better to step by i - 2 rather than i--, with some extra checks at startup, to run the algorithm 2 log(n) times faster.
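A sketch of how I read the cached-primes idea (names are mine; it generates ascending rather than the question's descending recursion, so the cache is complete before each test, and it trial-divides only by cached primes up to sqrt(n)):

```java
import java.util.ArrayList;
import java.util.List;

public class MemoizedPrimes {
    // Primes found so far, in ascending order: the cache.
    static final List<Integer> primes = new ArrayList<>();

    // Trial-divide n by cached primes only, stopping past sqrt(n).
    static boolean isPrime(int n) {
        if (n < 2) return false;
        for (int p : primes) {
            if ((long) p * p > n) break;  // no prime divisor <= sqrt(n) found
            if (n % p == 0) return false; // p divides n, so n is composite
        }
        return true;
    }

    // Generate ascending so every prime <= sqrt(i) is cached before testing i.
    static List<Integer> primesUpTo(int n) {
        primes.clear();
        for (int i = 2; i <= n; i++)
            if (isPrime(i)) primes.add(i);
        return new ArrayList<>(primes);
    }

    public static void main(String[] args) {
        System.out.println(primesUpTo(100));
    }
}
```

Dividing only by primes (rather than by every integer) up to sqrt(n) is where the claimed log-factor saving comes from, since about 1 in log(n) candidates below sqrt(n) is prime.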

yurisich
  • 6,991
  • 7
  • 42
  • 63
Saeed Amiri
  • 22,252
  • 5
  • 45
  • 83
  • There is no way to generate primes < `n` in `O(n)`. Memoization does not help here. What you're describing is a good optimization, but it's not memoization and it's not `O(n)`, it's still `O(n*sqrt(n))` because primality testing will still be `O(sqrt n)`. – IVlad Dec 29 '10 at 11:04
  • @lVald, in fact my approach is O(n) on average (like quicksort is n log n), because there is a finite number of calls to isPrime(n, i). I think it's a constant factor (really, it's not constant, but for example log log log log n is constant on current PCs), and memoization is widely used; because the approach is isPrime(n, i - 2), with high probability the number is checked unless it is prime (1/log n) – Saeed Amiri Dec 29 '10 at 11:10
  • @lVald, I'll edit my answer to say more exact time. – Saeed Amiri Dec 29 '10 at 11:16
  • IVlad returns :) so there will be an answer in 10 minutes :) –  Dec 29 '10 at 11:33
  • @Saeed - I am really not sure what you're describing. Can you post pseudocode? I don't see where `sqrt(n) * log(n)` comes from. @hilal - you already have a good answer given by @Kyle S. Look up the sieve of Eratosthenes on google. If you managed to translate the classical algorithm into a recursive implementation you'll have no problem doing it for the sieve. The sieve runs in `O(n log log n)`. – IVlad Dec 29 '10 at 11:42
  • @lVald, say what is ambiguous and I'll explain it. If you're talking about sqrt(n) * log(n): it comes from the prime numbers; the number of primes below n is log(n) on average (and there is at most a sqrt(n) search for each of them) – Saeed Amiri Dec 29 '10 at 11:51
  • @Saeed - no, that's wrong. The number of primes below `n` is `n / log(n) = O(n)`, not `log n`. Now do you see why that makes your algorithm `O(n sqrt n)`? – IVlad Dec 29 '10 at 12:38
  • Ooooh @lVald you are right! I said (in my mind!) that after every log(n) numbers I'll find another prime, so the number of primes is log(n)!!!! Yes, I should go back to school to learn division :) – Saeed Amiri Dec 29 '10 at 13:26
  • @IVlad (almost 4 years later but still...) actually, `n / log(n) ~ o(n)`, with the *little* `o`. O(n/log(n)) is a complexity class on its own. When [approximated as `n^a(n)`](http://en.wikipedia.org/wiki/Analysis_of_algorithms#Empirical_orders_of_growth), the `a(n)` will always be smaller than 2.0, converging to 2.0 from below. – Will Ness Oct 16 '14 at 19:05
  • @WillNess - ok, but that is still not `O(n)`, is it? – IVlad Oct 16 '14 at 20:00
  • @IVlad this is precisely the point, that it is *not* O(n) - its either O(n/log n), or o(n), whichever you like. you said "n / log(n) = O(n)" in a comment above, and I wanted to correct that. :) – Will Ness Oct 16 '14 at 20:04
  • @WillNess - oh, I thought you were talking about a different statement. In that case, I don't see why saying `n / log n` is `O(n)` is wrong - isn't `n / log n` bounded above by `n` for sufficiently large values of `n`? That is what big-oh notation deals with, upper bounds. My point was that `log n` is not an upper bound for it, while `n` is. – IVlad Oct 16 '14 at 20:21
  • @IVlad it's the difference between O and o. [there _is_ a difference](https://en.wikipedia.org/wiki/Big_O_notation#Family_of_Bachmann.E2.80.93Landau_notations). `n/log n` is not *bounded* by n from above (`< k*n`, for some k, for sufficiently large n), it's *dominated* by n (`< k*n`, for _any_ k, however small, for sufficiently large n). – Will Ness Oct 17 '14 at 00:40
  • @IVlad if *f* in *O(n^a)* [can be approximated by `k*n^b`](http://en.wikipedia.org/wiki/Analysis_of_algorithms#Empirical_orders_of_growth) it will have `b <= a`; but if *f* is in *o(n^a)*, its approximation by `k*n^b` will always have `b < a`. I think. – Will Ness Oct 17 '14 at 00:50
  • @WillNess - I know there is a difference, but I don't see how that is relevant here. The formal definition of big-oh says that `n / log n` is `O(n)` if we can find a `k` and an `n0` such that for all `n > n0`, we have `n / log n <= k * n`. For all `n > 10`, we have `n / log n <= 1 * n`, for example. Therefore, `n / log n` is `O(n)`. – IVlad Oct 17 '14 at 05:30
  • If you say it's `< k*n` for **any** `k`, then you agree that there exists **some** `k` for which it's `< k*n`. The big-oh definition does not specify that there must exist a unique such `k`, you can have an infinity. What you are saying about little-oh is irrelevant. We can both be right: me about it being `O(n)` and you about it being `o(n)` (haven't given that much thought yet though, but it looks right). – IVlad Oct 17 '14 at 05:33
  • As an aside, it is also `O(n^2), O(n^3) and O(n^googol)`. Big-oh can be abused like that simply because it tells us something about an upper bound. Sometimes it's helpful, other times not so much. Big-theta is usually more helpful because it gives a tight bound, but it was enough in this case to make my original point. – IVlad Oct 17 '14 at 05:36
  • @IVlad O(n) is an overstatement, just like O(n^2) is an overstatement. e.g. `2n` is not in o(n), it is only in O(n). `n/log(n)` is in both O(n) and o(n), and saying one subsumes the other (but not the other way around), so saying it's in o(n) is *more informative* -- just like saying `2n` is in O(n) is more informative than `O(n^2)` because one subsumes the other (but not the other way around). o(n) is a tighter upper-bound than O(n). whether this is relevant to your original point, I have no idea. :) I just know the two are different. And in practice the algo in o(n) will be significantly – Will Ness Oct 17 '14 at 07:40
  • ... faster than the one in O(n) - at any range above a certain point. – Will Ness Oct 17 '14 at 07:43
  • @WillNess - you are correct about that, of course better bounds are more informative. However, since it is also `O(n)`, I was not **wrong** in saying that :). – IVlad Oct 17 '14 at 10:26
  • @IVlad correct. (I didn't use the word "wrong"...) well, isn't it a bit like saying "I never make more than a million a year", on a five-figures salary. Also not wrong, just a little bit imprecise. :) – Will Ness Oct 17 '14 at 11:28
3

What you need is the Sieve of Forever. Here's the code for a recursive prime tester. I think it's quite efficient because it only needs to test the prime factors; let me know what you think ;)

By the way, I wouldn't try it with anything above a byte; it seems to take a while with anything over 100.

public boolean isPrime(byte testNum)
{
    if ( testNum <= 1 )
        return false;
    for ( byte primeFactor = 2; primeFactor < testNum; primeFactor++ )
        if ( isPrime(primeFactor) )
            if ( testNum % primeFactor == 0 )
                return false;
    return true;
}
nathan
  • 41
  • 1
  • 2? -- 3?2? -- 4?2? -- 5?2?3?2?4?2? -- 6?2? -- 7?2?3?2?4?2?5?2?3?2?4?2?6?2? -- 8?2? -- 9?2?3?2? -- 10?2? -- 11?2?3?2?4?2?5?2?3?2?4?2?6?2?7?2?3?2?4?2?5?2?3?2?4?2?6?2?8?2?9?2?3?2?10?2? -- ... this is hardly fast, let alone fastest. :) – Will Ness Feb 15 '14 at 18:41
  • so no, it is not efficient at all. it is *extremely* inefficient. you make gi-nor-mous amounts of work to spare one measly meager remainder test. this well might be the most inefficient impl I ever saw, ever. :) – Will Ness Feb 15 '14 at 18:53
  • 2
    This is magnificently slow, and I love it. ECPP and AKS are polylogarithmic, trial division is polynomial, and this is exponential. Where else can you see gaps like that? – Charles Feb 16 '15 at 21:57
  • so, it's `isPrime n = n > 1 && []==[f | f <- [2..n-1], isPrime f && rem n f == 0]`. (and the call to `isPrime f` is entirely superfluous, of course). still the champion in slowness. – Will Ness Aug 20 '16 at 15:27
1

First, if you want to generate large prime numbers (as opposed to testing integers for primality), then Pocklington's theorem comes in handy. This theorem allows a fast primality test for a candidate p if you know enough prime factors of p-1. Hence the following method is possible: generate a few primes, compute a suitable multiple of their product, and test using Pocklington's theorem. If you want to find large prime numbers (e.g. for the RSA cryptosystem), then you will have to apply this method recursively to generate the factors of p-1.

The description above lacks quite a few details, but the method has been analyzed in depth. I think this paper described the fastest method when it was published, though some time has gone by since then and someone might have improved on it.

P. Mihailescu, "Fast Generation of Provable Primes Using Search in Arithmetic Progressions", Proceedings of CRYPTO '94, Lecture Notes in Computer Science vol. 839, Springer, 1994, pp. 282-293.
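The certificate check at the heart of such methods might be sketched as follows (my own sketch, not the paper's algorithm; it assumes the caller supplies a witness a and the distinct prime factors of a fully factored part F of N-1 with F > sqrt(N), and it omits the witness search and the recursive factor generation entirely):

```java
import java.math.BigInteger;

public class Pocklington {
    // Pocklington check: given N, a witness a, and the distinct prime factors
    // of F, where N - 1 = F * R and F > sqrt(N) (caller's responsibility),
    // returns true only if N is provably prime with this witness.
    static boolean check(BigInteger n, BigInteger a, BigInteger[] factorsOfF) {
        BigInteger one = BigInteger.ONE;
        BigInteger nMinus1 = n.subtract(one);
        if (!a.modPow(nMinus1, n).equals(one))
            return false;                         // Fermat condition fails
        for (BigInteger q : factorsOfF) {
            BigInteger x = a.modPow(nMinus1.divide(q), n).subtract(one);
            if (!x.gcd(n).equals(one))
                return false;                     // gcd condition fails for q
        }
        return true;                              // certificate verified
    }

    public static void main(String[] args) {
        // Example: N = 2^16 + 1 = 65537, N - 1 = 2^16, so F = 2^16 > sqrt(N)
        // with the single prime factor 2; a = 3 is a known witness.
        BigInteger n = BigInteger.valueOf(65537);
        System.out.println(check(n, BigInteger.valueOf(3),
                new BigInteger[]{ BigInteger.valueOf(2) })); // prints true
    }
}
```

In the full method from the paper, the factors of F are themselves primes generated by the same procedure one level down, which is what makes the construction recursive.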

Accipitridae
  • 3,136
  • 19
  • 9
0

Why recursively?

Use a better prime-number-generation algorithm, like the Sieve of Eratosthenes or, even better, the Sieve of Atkin.

Prasoon Saurav
  • 91,295
  • 49
  • 239
  • 345
  • the question asks to find and generate prime numbers recursively –  Dec 29 '10 at 05:49