I have implemented the following version of the Sieve of Eratosthenes, which allocates memory on the heap to store the array representing the prime numbers.

void sieve(int *primes, int n) {
   for (int i = 0; i < n - 1; i++) {
      primes[i] = 1;
   }

   for (int p = 2; p <= n; p++) {
      if (primes[p - 2] == 1) {
         for (int i = 2*p - 2; i < n; i += p) {
            primes[i] = 0;
         }
      }
   }
}
void print_sieves(int n) {
   if (n < 3) return;

   int *primes = (int*) malloc((n - 1) * sizeof(int));
   sieve(primes, n);
   int print_counter = 0;

   for (int i = 0; i < n - 1; i++) {
      if (primes[i] == 1) {
         printf("%10d ", i + 2);
         print_counter++;

         if (print_counter % COLUMNS == 0)
            printf("\n");
      }
   }
   free(primes);
   printf("\n");
}

For most arguments passed to print_sieves the program works as expected; however, when passing 15 to print_sieves I end up with the following error:

a.out: malloc.c:2392: sysmalloc: Assertion `(old_top == initial_top (av) && old_size == 0) || ((unsigned long) (old_size) >= MINSIZE && prev_inuse (old_top) && ((unsigned long) old_end & (pagesize - 1)) == 0)' failed.
Aborted (core dumped)

For a slightly different version of this program, in which I instead allocate the array representing prime numbers on the stack, I run into a different error: when searching for primes up to a large bound, for example when calling print_sieves with the argument 10000000, I get Segmentation fault (core dumped). The implementation of sieve is identical in both versions of the program, but the print_sieves functions differ slightly. Below is the print_sieves function that allocates the array on the stack:

void print_sieves(int n) {
   if (n < 3) return;

   int primes[n - 1];
   sieve(primes, n);
   int print_counter = 0;

   for (int i = 0; i < n - 1; i++) {
      if (primes[i] == 1) {
         printf("%10d ", i + 2);
         print_counter++;

         if (print_counter % COLUMNS == 0)
            printf("\n");
      }
   }
   printf("\n");
}

My guess is that I am not managing the memory properly, but I do not know where I am going wrong. What could be causing these errors, and how can they be resolved?

mooncow
    In both variants you're allocating space for n - 1 ints, while looping up to and including n. Try allocating space for n + 1 instead – hager Sep 23 '19 at 13:25
  • @hager Allocating space for n + 1 ints resolves the first error, i.e. the error that arises when calling `print_sieves(15)` for the heap-based implementation, but the second error persists. – mooncow Sep 23 '19 at 13:32
  • Allocating space for `n` ints is also sufficient to solve the first problem. I wonder, though, why it was insufficient to allocate space for `n - 1` ints, because when passing the argument `n` I check all integers in the closed interval `[2, n]`, of which there are only `n - 1`. – mooncow Sep 23 '19 at 13:42
  • In the second scenario, you're trying to allocate an array of 10000000 `int`s on the stack, and that's most likely what leads to the segmentation fault – hager Sep 23 '19 at 13:45
  • @hager So are you suspecting some kind of stack overflow in the second scenario? And, if so, do you know of any way to check how much space there is on the stack so that this suspicion can be verified? – mooncow Sep 23 '19 at 13:49
  • It appears that the approximate maximum argument that can be passed to `print_sieves` in the second scenario is `2090000`. – mooncow Sep 23 '19 at 13:53
  • That depends on what platform you're running on; on my Linux desktop the default max stack size is 8192 kB. You appear to have the same limit, since `2090000 * sizeof(int)` just barely fits on the stack then. – hager Sep 23 '19 at 14:00
  • It's not the cause of your problem, but while we're here, I recommend that you read and understand [the question on why not to cast the return value of `malloc()` and family in C](/q/605845). Prefer `int *primes = malloc(sizeof *primes * (n+1));` – Toby Speight Sep 23 '19 at 14:54

1 Answer

Since you allocate memory for only n - 1 elements, I suggest changing the innermost loop in sieve() from

         for (int i = 2*p - 2; i < n; i += p) {

to

         for (int i = 2*p - 2; i < n - 1; i += p) {

in order to make the index limit match the allocated size.
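
For reference, here is a sketch of sieve() with only that bound changed; everything else, including the n - 1 allocation and the mapping value = index + 2, is kept exactly as in the question:

void sieve(int *primes, int n) {
   /* primes[i] represents the value i + 2, so the valid indices are 0 .. n - 2 */
   for (int i = 0; i < n - 1; i++) {
      primes[i] = 1;
   }

   for (int p = 2; p <= n; p++) {
      if (primes[p - 2] == 1) {
         /* mark multiples of p starting at 2*p (index 2*p - 2), staying below index n - 1 */
         for (int i = 2*p - 2; i < n - 1; i += p) {
            primes[i] = 0;
         }
      }
   }
}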

With n = 10000000 you most probably get a stack overflow: the variable-length array primes[n - 1] needs roughly 40 MB (10000000 ints at 4 bytes each), far more than the typical 8 MB default stack size limit mentioned in the comments.

You could use getrlimit()/setrlimit() to query or change the stack size limit. It might be difficult to calculate a safe array size from the maximum stack size, because you would also have to find out how much stack is needed besides the large array.
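
If you just want to see the limit, a minimal sketch using getrlimit() (assuming a POSIX/Linux system) could look like this; it only reports the current soft and hard limits for RLIMIT_STACK:

#include <stdio.h>
#include <sys/resource.h>

int main(void) {
   struct rlimit rl;

   /* RLIMIT_STACK holds the stack size limits in bytes */
   if (getrlimit(RLIMIT_STACK, &rl) != 0) {
      perror("getrlimit");
      return 1;
   }

   if (rl.rlim_cur == RLIM_INFINITY)
      printf("soft stack limit: unlimited\n");
   else
      printf("soft stack limit: %llu bytes\n", (unsigned long long) rl.rlim_cur);

   if (rl.rlim_max == RLIM_INFINITY)
      printf("hard stack limit: unlimited\n");
   else
      printf("hard stack limit: %llu bytes\n", (unsigned long long) rl.rlim_max);

   return 0;
}

Raising the soft limit with setrlimit() (up to the hard limit) before calling print_sieves is possible, but simply using the heap-allocating version for large n avoids the problem entirely.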

See also Increase stack size in Linux with setrlimit

Bodo