I've programmed a simple implementation of the sieve of Eratosthenes for calculating primes in a given range. Initially I planned to use `long int` as the main integer type. Calculating all primes less than 10 million took between 11 and 12 seconds. But when I switched the type to `int`, I was getting about 4-5 seconds instead. So I wonder: why is that?
I understand that `long int` takes more space than `int`, but I'm running a 64-bit CPU, so, if I understand correctly, it has an ALU that accepts 64-bit integers, i.e. instructions on such numbers can be executed in the same number of cycles. My prior research turned up similar questions being asked about the performance difference between `unsigned` and `signed` types ([here][1] or [here][2]), but I'm using `signed` variables and I still get a performance drop for the same (less than 2^33-1) input. Below I post my full code.
    #include <iostream>
    #include <fstream>
    #include <ctime>    // clock_t, clock(), CLOCKS_PER_SEC

    int main()
    {
        unsigned long int X, i, j;
        bool *prime, *primeFill;
        std::fstream outFile;
        clock_t T;

        /* user interaction */
        std::cout << "This applet will export all prime numbers from the range (0, X). Enter desired X value: ";
        std::cin >> X;
        primeFill = new bool[X];
        for (i = 2; i < X; i++)
            primeFill[i] = false;
        prime = new bool[X];

        /* main loop, where all actual computation happens */
        T = clock(); i = 2;
        while (i < X) {
            prime[i] = true, primeFill[i] = true;
            for (j = 2*i; j < X; j += i)      // j < X, not j <= X: j == X would write out of bounds
                prime[j] = false, primeFill[j] = true;
            while (i < X && primeFill[i])     // bound check so we never read past the array
                i++;
        }
        T = clock() - T;

        /* outputting results to text file & standard output */
        outFile.open("primes.txt", std::ios::out);
        for (i = 2; i < X; i++)
            if (prime[i])
                outFile << i << '\n';
        outFile.close();
        delete[] prime;
        delete[] primeFill;
        std::cout << "Finished. Computation took " << (1000*float(T)) / CLOCKS_PER_SEC << " ms\n";
        return 0;
    }
**EDIT** The new code above no longer exhibits this problem. Refer to my answer below.