The running time of the Sieve of Sundaram for generating a list of prime numbers up to a number n is given as O(n log n), according to http://en.wikipedia.org/wiki/Sieve_of_Sundaram. Is this algorithm better than the Sieve of Atkin, and if it is, could you elaborate a little on how exactly it works?
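For reference, here is a minimal Python sketch of the Sieve of Sundaram (the function name and limit parameter are only illustrative, not from any particular library): it marks every number of the form i + j + 2ij with 1 <= i <= j, and each unmarked m that remains yields the odd prime 2m + 1; the prime 2 is added separately.

```python
def sundaram_primes(n):
    """Return all primes <= n using the Sieve of Sundaram (illustrative sketch)."""
    if n < 2:
        return []
    k = (n - 1) // 2                    # consider 2*m + 1 for m = 1..k
    marked = bytearray(k + 1)           # marked[m] == 1  <=>  2*m + 1 is composite
    for i in range(1, k + 1):
        first = 2 * i * i + 2 * i       # j = i gives i + j + 2*i*j = 2*i*i + 2*i
        if first > k:
            break                       # larger i can only start beyond the range
        # stepping j by 1 advances the marked index by 2*i + 1
        for m in range(first, k + 1, 2 * i + 1):
            marked[m] = 1
    return [2] + [2 * m + 1 for m in range(1, k + 1) if not marked[m]]

print(sundaram_primes(50))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
```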
- What do you mean by better? Better in practice or in theory? – starblue Mar 09 '11 at 21:02
- Atkin, Eratosthenes, Sundaram... in that order. – st0le Mar 13 '11 at 09:14
- @st0le, can you support putting the Sieve of Atkin (SoA) on top? I have [done research for this answer](http://stackoverflow.com/a/22161595/549617) that says that SoA only beats a **maximally optimized as to wheel factorization** Sieve of Eratosthenes (SoE) in very limited specific cases, and then only by a small margin if at all. The Atkin and Bernstein study was flawed in that they restricted the reference SoE implementation to only the same level of wheel factorization as is inherent to the SoA and corrupted their timing comparison by using a buffer size of 4 KB for SoE and 8 KB for SoA. – GordonBGood Apr 01 '14 at 06:16
- cont'd: the biggest problem with both the Sieves of Atkin and Sundaram is efficient multi-processing, as they have an ever-increasing number of sequences up to the square root of the sieving range that need new start addresses calculated for every new segment page, at an ever-increasing computational cost. The Sieve of Eratosthenes has a much lower ratio of sequences, based only on the base primes up to the square root of the range, which decrease in density with increasing range (a rough sketch of such a segmented sieve follows these comments). This is also why Daniel Bernstein's "primegen" does not show empirical O(n) performance with increasing range. – GordonBGood Apr 04 '14 at 08:46
- @Gordon: Inspecting the [source code of primegen](https://cr.yp.to/primegen.html) shows that Dan Bernstein gave his Atkin implementation a sieve buffer that is **four times** as big as the one he used for the Eratosthenes foil (2048 * 16 uints vs. 1001 * 8 uints). Also, his implementation approach does not interact well with modern memory cache systems. On this aging Lynnfield a MinGW-built `primespeed` counts the primes up to 10^9 in 0.68 seconds and `eratspeed` takes 0.70 (redirecting to a file!). I can beat that even in C# with an odds-only SoE, without reaching for C/C++ or a mod 30 wheel. – DarthGizka Jun 06 '16 at 11:14
- When the programs are run printing to the console (instead of redirecting output) then I get 0.75 s for `primespeed` and 1.08 s for `eratspeed`, because the latter prints four times as much output. This subtle little cheat is probably the original source of the myth that Atkin is practically faster than Eratosthenes, instead of only asymptotically as a theoretical gimmick... Also, if the buffer size is doubled for `eratspeed` then it is suddenly faster than the Atkin-based `primespeed` (0.65 s vs. 0.68 s) even though the Atkin buffer is still twice as big. – DarthGizka Jun 06 '16 at 13:19
- @DarthGizka, amen to what you say, except that I don't think Bernstein cheated as far as redirection goes, as the timings don't include the print output. However, not making the buffer sizes the same, and limiting the eratspeed wheel factorization to the "mod 30 wheel" instead of a "mod 210 wheel" plus pre-culling the Eratosthenes buffer for base primes up to 19, as we are able to do, means that he didn't get all the speed from Eratosthenes that we can; a fully optimized SoE would thus beat the segmented SoA. He also did not prove O(n) performance for the segmented SoA, as "primegen" is much worse. – GordonBGood Jun 07 '16 at 01:27
- @DarthGizka, it seems you may be using external timing, since it makes a difference where you direct the console output; the built-in timing is probably more accurate as it excludes the output already. Re: "I can beat that even in C# with an odds-only SoE", I doubt you can do that, at least single-threaded, as C# has built-in array bounds checks that increase culling operations by a couple of machine cycles each, and odds-only has about 2.5 times as many ops as "eratspeed"'s "mod 30 wheel". You can beat it in C# with a "mod 210 wheel" plus pre-culling primes up to 19, and I have done that. – GordonBGood Jun 07 '16 at 07:22
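To make the segmentation point from the comments concrete, here is a rough Python sketch of a segmented, odds-only Sieve of Eratosthenes (function names and the segment size are illustrative, not from primegen or any other code discussed above): for each new segment, only the base primes up to sqrt(n) need a fresh start offset, which is the per-segment recalculation the comments refer to.

```python
import math

def segmented_soe_count(n, segment_size=32768):
    """Count primes <= n with a segmented, odds-only Sieve of Eratosthenes (sketch)."""
    if n < 2:
        return 0
    root = math.isqrt(n)
    # simple sieve for the base primes up to sqrt(n)
    small = bytearray([1]) * (root + 1)
    small[0:2] = b"\x00\x00"
    for p in range(2, math.isqrt(root) + 1):
        if small[p]:
            small[p * p :: p] = bytearray(len(small[p * p :: p]))
    base_primes = [p for p in range(3, root + 1) if small[p]]

    count = 1                        # the prime 2
    span = 2 * segment_size          # numbers per segment (only odds are stored)
    low = 3
    while low <= n:
        high = min(low + span - 1, n)
        flags = bytearray([1]) * ((high - low) // 2 + 1)   # flags[i] <-> odd low + 2*i
        for p in base_primes:
            # recompute where p's odd multiples start in this segment
            start = max(p * p, ((low + p - 1) // p) * p)
            if start % 2 == 0:
                start += p           # only odd multiples are stored
            for m in range(start, high + 1, 2 * p):
                flags[(m - low) // 2] = 0
        count += sum(flags)
        low += span                  # low stays odd because span is even
    return count

print(segmented_soe_count(10**6))    # 78498
```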
2 Answers
In theory:
- The sieve of Sundaram has arithmetic complexity O(n log n).
- The basic sieve of Eratosthenes has arithmetic complexity O(n log log n).
- Optimized variants of the sieve of Eratosthenes have arithmetic complexity O(n).
- The sieve of Atkin has not only arithmetic but also bit complexity O(n/log log n).
- A magical sieve where you are given the primes, in order, takes time O(n/log n).
In practice, the sieve of Sundaram is so slow that no one uses it, and the sieve of Atkin is slower than optimized Eratosthenes variants (although it's at least competitive). Perhaps one day Atkin or something else will displace Eratosthenes but it's not likely to happen soon. (Also, there's no such thing as magic.)
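To make the first two bullets a bit more tangible, here is a small Python sketch (illustrative only, not taken from the answer) of the basic Sieve of Eratosthenes instrumented with an operation counter; the ratio ops / (n · log log n) stays roughly flat as n grows, which is the O(n log log n) behaviour listed above.

```python
import math

def soe_op_count(n):
    """Basic Sieve of Eratosthenes; returns (prime count, number of crossings-off)."""
    is_prime = bytearray([1]) * (n + 1)
    is_prime[0] = is_prime[1] = 0
    ops = 0
    for p in range(2, math.isqrt(n) + 1):
        if is_prime[p]:
            for m in range(p * p, n + 1, p):
                is_prime[m] = 0      # cross off the multiple m of p
                ops += 1
    return sum(is_prime), ops

for n in (10**4, 10**5, 10**6):
    primes, ops = soe_op_count(n)
    print(f"n={n}: {primes} primes, {ops} crossings, "
          f"ops/(n*loglog n) = {ops / (n * math.log(math.log(n))):.3f}")
```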

Charles
Well, the Wikipedia page for the Sieve of Atkin says:
> This sieve computes primes up to N using O(N/log log N) operations
This is better than the Sieve of Sundaram, which is Θ(N log N) in operations (note that Θ() is a stronger statement than O(): a Θ(N log N) bound is tight, whereas O() only gives an upper bound).
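A quick back-of-the-envelope comparison of the two nominal operation counts (purely illustrative, using natural logs and ignoring the constant factors that the comments below argue dominate in practice):

```python
import math

# Purely illustrative: nominal operation counts only, no constant factors.
for exp in (6, 9, 12):
    n = 10 ** exp
    atkin_ops = n / math.log(math.log(n))      # O(N / log log N)
    sundaram_ops = n * math.log(n)             # Theta(N log N)
    print(f"N = 10^{exp}: N/loglogN ~ {atkin_ops:.2e}, "
          f"N*logN ~ {sundaram_ops:.2e}, ratio ~ {sundaram_ops / atkin_ops:.0f}x")
```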

CanSpice
- f(N) = Theta(N log N) implies f(N) = O(N log N). The converse is false though. – Alexandre C. Mar 08 '11 at 17:23
- @CanSpice, one has to be careful in using asymptotic computational complexity to choose an algorithm, as one still has to ask "Over what range?" and "What is the speed for a given N?". The Wikipedia article for the Sieve of Sundaram used to say that it can be made O(N) if implemented with O(1) hash tables; however, even if that were true, hash tables typically carry a constant overhead of tens of times more than not using them, so the resulting O(N) algorithm would certainly never catch up to the O(N log log N) Sieve of Eratosthenes. – GordonBGood Apr 01 '14 at 07:17
- cont'd: The same can be said when comparing the Sieve of Eratosthenes (SoE) to the Sieve of Atkin (SoA), with relative performances of O(N log log N)/O(N) and O(N)/O(N / log log N) (pairs of complexities for each, with the same optimizations available to both, though neither of the second variants has ever been practically implemented): if the SoA has a higher constant-factor overhead than the SoE (which it does when both are maximally optimized), then the SoA can actually be slower than the SoE for practical ranges. Also see [this article](http://rjlipton.wordpress.com/2009/02/13/o-abuse-2/). – GordonBGood Apr 01 '14 at 08:37
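To illustrate the "practical ranges" point, here is a toy calculation with invented constant factors (they are not measurements of any real implementation): even a modest constant-factor disadvantage keeps the nominally O(N) sieve behind the O(N log log N) one far beyond any practical sieving range.

```python
import math

# Hypothetical per-operation costs, chosen only to illustrate the argument in
# the comment above; they are NOT measured values for any real implementation.
C_SOE = 1.0    # cost per operation of an O(N log log N) sieve
C_SOA = 4.0    # cost per operation of an O(N) sieve with higher overhead

for exp in (6, 9, 12, 15, 18):
    n = 10 ** exp
    soe_cost = C_SOE * n * math.log(math.log(n))
    soa_cost = C_SOA * n
    winner = "SoE" if soe_cost < soa_cost else "SoA"
    print(f"N = 10^{exp}: SoE ~ {soe_cost:.2e}, SoA ~ {soa_cost:.2e}  -> {winner} cheaper")
```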