
Hello, I've recently started testing out quicksort. I've written a program that creates an array of a user-specified size, fills it with random numbers, and then sorts it with quicksort. Now here is my problem: on a Linux machine with just 4 GB of RAM I could make an array with a size of up to 10^8 before the computer became unusable, but on my Mac with 8 GB of RAM I can only make an array with a size of up to 10^6. If I try making an array of size 10^7 or greater, I get a segmentation fault. Is this some hard restriction from the operating system, and can it be changed? Here is my code:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int FindPivot(int i, int j);
void PrintTable(int* table, int size);
void NextThing(int* table, int size);
void QuickSort(int* table, int i, int j);

void RandomizeTable (int* table, int size) { 

  int i;
  for (i = 0; i < size; i++) {
    table[i] = -10000 + rand() % (10000+1-(-10000));
    //printf("%d\t", table[i]);
  }
  printf("\n");
  NextThing(table, size);
}

void NextThing(int* table, int size) { 
  printf("Sorting the table...\n");
  clock_t x = clock();
  QuickSort(table, 0, size - 1);
  clock_t y= clock();
  printf("Time it took : %fs\n", ((double)(y - x))/CLOCKS_PER_SEC);

  //Second sorting of the table, just to see how long does it take for quicksort to sort an already sorted table
  printf("Sorting the table...\n");
  clock_t x2 = clock();
  QuickSort(table, 0, size - 1);
  clock_t y2= clock();
  printf("Time it took : %fs\n", ((double)(y2 - x2))/CLOCKS_PER_SEC);


  exit(0);
}

void Swap(int* table, int i, int j) {
  int temp;
  temp = table[i];
  table[i] = table[j];
  table[j] = temp;
}

int Partition(int* table, int i, int j) {
  int p, q, key; 

  p = FindPivot(i, j);
  key = table[p];

  Swap(table, i, p);

  for (p = i, q = i + 1; q <= j; q++)
    if (table[q] < key) {
      p++;
      Swap(table, p, q);
    }
  Swap(table, i, p);
  return p;
}

void QuickSort(int* table, int i, int j) {
  int p;
  if (i < j) {
    p = Partition(table, i, j);
    QuickSort(table, i, p - 1);
    QuickSort(table, p + 1, j);
  }
}//QuickSort 

void PrintTable(int* table, int size) {
    int i;
    for (i = 0; i < size; i++)
        printf("%d ", table[i]);
    printf("\n");   
}

int FindPivot(int i, int j) {
  int pivot;
  /*pivot = i + rand() % (j + 1 - i); */ //Method I: randomized pivot
  pivot = (i + j) / 2; //Method II: arithmetic average
  return pivot;
}



int main () {
  time_t t;
  srand((unsigned) time(&t));

  int n;
  printf("Array size:");
  scanf("%d", &n);
  int tab[n];     //Here is where the error occurs if the array size is > 10^6

  RandomizeTable(tab, n);
}

I am almost sure it's a problem with making an array of that size. I've tried debugging the code with a printf: it printed the text when placed before the array declaration in main(), and didn't print it when I put it afterwards.

Victor
  • Your notation `time_t t; srand((unsigned) time(&t));` is verbose: you can use `srand(time(0));` or, if you insist on the cast, `srand((unsigned)time(0));` — the [`time()`](http://pubs.opengroup.org/onlinepubs/9699919799/functions/time.html) function is happy with a null pointer as its argument. – Jonathan Leffler Jan 14 '19 at 20:18
  • 2
    On most Unix systems, you can create up to about 8 MiB on the stack. You should usually start using dynamic memory allocation long before you reach that point, though — start worrying if you create 1 MiB of data on the stack (if only because the limit is normally 1 MiB on Windows — but also because you probably don't want to eat up all the stack when using a recursive algorithm). – Jonathan Leffler Jan 14 '19 at 20:22
  • Under Linux there are certain limits on memory usage, which can be hard or soft (depending on the kernel build). One of them is 'stacksize'. In csh you can use the `limit` command to check and set them (if they are soft). There might be something similar for Mac. – Serge Jan 14 '19 at 20:59
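To make the comments above concrete: on Linux (and on macOS with bash or zsh) the soft stack limit can be inspected, and raised up to the hard limit, with `ulimit`. This is only a sketch; the numbers vary by system, and the `./quicksort` binary name is assumed:

```shell
# Show the current soft stack limit in KiB (8192 = 8 MiB is a common default)
ulimit -s

# Show the hard limit, which caps how far the soft limit may be raised
ulimit -Hs

# Illustrative only: raise the soft limit for this shell, then run the program
# ulimit -s 65532 && ./quicksort
```

Raising the limit only treats the symptom, though; for arrays this large, heap allocation (as in the answer below the question) is the more portable fix.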

1 Answer


Assuming you're using C99, it may be implementation specific (but probably not): the maximum size of any object is limited by SIZE_MAX, from this post, which means the minimum (in theory) could be less than 10^6 (1,000,000) bytes.

If this is the issue, you can check with something like

size_t max_size = (size_t)-1;

From here.

Otherwise, the other post is your best bet: if the array can't fit on the stack, `malloc` can allocate it on the heap.

int *tab = malloc(n*sizeof(int));

This will allocate n * sizeof(int) bytes on the heap, and should work.

Note that memory allocated this way is not released automatically, so you should call `free` on it when you're done:

free(tab);
Daniel
  • Dynamic allocation is the answer here. – tadman Jan 14 '19 at 20:28
  • On most systems, `SIZE_MAX` is 2³²-1 (for 32-bit) and 2⁶⁴-1 (for 64-bit). Although in theory `SIZE_MAX` could be as small as 65535 (2¹⁶-1), that is more theoretical than actual on any desktop or server system — you'd be working with 8-bit microprocessors if the limit was that small. – Jonathan Leffler Jan 14 '19 at 20:30
  • The limit tends to be the same regardless of the version of C. However, in C90, `SIZE_MAX` was not a mandated macro; it was added to C99 (but is also, therefore, in C11 and C18). – Jonathan Leffler Jan 14 '19 at 20:32