In Linux, at least on SUSE/SLES 11, if you type 'limit' it responds with
cputime unlimited
filesize unlimited
datasize unlimited
stacksize 8192 kbytes
coredumpsize 1 kbytes
memoryuse 449929820 kbytes
vmemoryuse 423463360 kbytes
descriptors 1024
memorylocked 64 kbytes
maxproc 4135289
maxlocks unlimited
maxsignal 4135289
maxmessage 819200
maxnice 0
maxrtprio 0
maxrttime unlimited
Those are my default settings, coming from /etc/security/limits.conf.
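For reference, the same limit can be queried from inside a C program with getrlimit(); a minimal sketch (the value printed should match the 8192 kbytes above):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* RLIMIT_STACK is the per-process stack size that 'limit stacksize' reports */
    if (getrlimit(RLIMIT_STACK, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }

    if (rl.rlim_cur == RLIM_INFINITY)
        printf("soft stack limit: unlimited\n");
    else
        printf("soft stack limit: %lu kbytes\n", (unsigned long)(rl.rlim_cur / 1024));
    return 0;
}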
I have a C program of around 5500 lines, with minimal comments. It deals with mesh structures, so I declare some big arrays: a "nodes" array of structs holding x, y, z variables of type double (plus some other integer and double members where needed), and an "elements" array of structs holding n1, n2, n3 variables of type integer. In many of the functions I declare something like
struct NodeTYPE nodes[200000];
struct ElemTYPE elements[300000];
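As a rough check (assuming the structs hold only the members described above, so the real sizes may be larger), the footprint of those two locals can be printed with sizeof:

#include <stdio.h>

struct NodeTYPE { double x, y, z; };   /* simplified: the real struct has extra members */
struct ElemTYPE { int n1, n2, n3; };

int main(void)
{
    /* roughly 200000 * 24 bytes + 300000 * 12 bytes = about 8.4 MB,
       already past an 8192 kbyte stack before any other usage */
    printf("nodes:    %zu bytes\n", 200000 * sizeof(struct NodeTYPE));
    printf("elements: %zu bytes\n", 300000 * sizeof(struct ElemTYPE));
    return 0;
}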
When the program runs, it prints a menu and you enter a number to choose what to do: entering 1 calls one function, entering 2 calls a different function, and so on. Most of them work, but one did not; in the debugger, the program segmentation faults as soon as that function is called.
If I modify /etc/security/limits.conf and add
* hard stacksize unlimited
* soft stacksize unlimited
then the program works without changing anything in the code and without recompiling. The program is compiled via
gcc myprogram.c -O2 -o myprogram.x -lm
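If it helps, I believe gcc can also warn when a single function's stack frame gets large, via -Wstack-usage (assuming the installed gcc version supports that option), e.g.

gcc myprogram.c -O2 -Wstack-usage=1000000 -o myprogram.x -lm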
Can someone explain in some detail why this happens, and what the best method is for fixing this type of problem?
Is it poor programming on my part that causes the segmentation fault? I recall that in the past, before I figured out making stacksize unlimited, if I put my large arrays globally in the program, outside of main(), the program would not seg fault; it was only when those big array declarations were inside functions that this problem occurred.
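For comparison, this is what I assume the heap-based alternative would look like inside one of those functions (names and struct members simplified, not my actual code):

#include <stdlib.h>
#include <stdio.h>

struct NodeTYPE { double x, y, z; };   /* simplified */
struct ElemTYPE { int n1, n2, n3; };

void some_mesh_function(void)
{
    /* heap allocation: the arrays no longer count against the 8 MB stack */
    struct NodeTYPE *nodes    = malloc(200000 * sizeof *nodes);
    struct ElemTYPE *elements = malloc(300000 * sizeof *elements);

    if (nodes == NULL || elements == NULL) {
        fprintf(stderr, "out of memory\n");
        free(nodes);
        free(elements);
        return;
    }

    /* ... use nodes[] and elements[] exactly as before ... */

    free(elements);
    free(nodes);
}

Is that the kind of fix that is recommended here, versus keeping the arrays global or raising the stack limit?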
Is a stacksize limit of 8 MB ridiculously small (my system has over 128 GB of RAM)?