
In Linux, at least in SUSE/SLES version 11, if you type 'limit' it will respond with:

cputime      unlimited
filesize     unlimited
datasize     unlimited
stacksize    8192 kbytes
coredumpsize 1 kbytes
memoryuse    449929820 kbytes
vmemoryuse   423463360 kbytes
descriptors  1024
memorylocked 64 kbytes
maxproc      4135289
maxlocks     unlimited
maxsignal    4135289
maxmessage   819200
maxnice      0
maxrtprio    0
maxrttime    unlimited

Those are my default settings in /etc/security/limits.conf.

I have a C program that is around 5500 lines of code, with minimal comments. I declare some big arrays; the program deals with mesh structures, so there is a struct "nodes" array with x, y, z members of type double (along with some other integer and double members where needed), and a struct "elements" array with n1, n2, n3 members of type integer. In many of the functions I'll declare something like

struct NodeTYPE nodes[200000];
struct ElemTYPE elements[300000];
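
For reference, here is a simplified sketch of the two structures and of the stack space those local arrays would need; the real structs have a few extra members, so the real numbers are a bit bigger than this:

#include <stdio.h>

/* Simplified sketch of the structures described above; the real ones
   have a few more members, so they are at least this large. */
struct NodeTYPE { double x, y, z; };   /* typically 24 bytes */
struct ElemTYPE { int n1, n2, n3; };   /* typically 12 bytes */

int main(void)
{
    size_t nodes_bytes = sizeof(struct NodeTYPE) * 200000;  /* about 4.8 MB */
    size_t elems_bytes = sizeof(struct ElemTYPE) * 300000;  /* about 3.6 MB */

    /* For comparison, the 8192 kB soft stack limit is 8192 * 1024 = 8,388,608 bytes,
       so two such local arrays together are already over it. */
    printf("nodes: %zu  elements: %zu  total: %zu bytes\n",
           nodes_bytes, elems_bytes, nodes_bytes + elems_bytes);
    return 0;
}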

When the program runs, it prints a menu of things to do, and you enter a number. Entering 1 calls some function, entering 2 calls a different function, and so on. Most of them work, but one did not: in the debugger, it segmentation faults as soon as that function is called.

If i modify /etc/security/limits.conf and do

* hard stacksize unlimited
* soft stacksize unlimited

then the program works without changing anything in the code and without recompiling. The program is compiled via

gcc myprogram.c -O2 -o myprogram.x -lm
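
As an aside, I gather the soft limit could also be raised per process with setrlimit() near the top of main(), instead of editing limits.conf; a rough, untested sketch of what I think that would look like:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* Raise the soft stack limit as far as the hard limit allows. */
    if (getrlimit(RLIMIT_STACK, &rl) == 0) {
        rl.rlim_cur = rl.rlim_max;
        if (setrlimit(RLIMIT_STACK, &rl) != 0)
            perror("setrlimit");
    }

    /* ... rest of the program (menu, etc.) ... */
    return 0;
}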

Can someone explain in some detail why this happens, and what the best method is for fixing this type of problem?

Is it poor programming on my part that causes the segmentation fault? As I recall, back when I figured out the stacksize-unlimited trick, if I put my large arrays globally in the program, outside of main(), then the program would not seg fault; it was only when those big array declarations were inside functions that this problem appeared.
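
In other words, what I think I am choosing between is making the arrays global/static (so they do not live on the stack) or allocating them on the heap; a rough sketch of both, with made-up function names:

#include <stdlib.h>

struct NodeTYPE { double x, y, z; };
struct ElemTYPE { int n1, n2, n3; };

/* Option 1: file-scope (or static) arrays are not on the stack,
   which matches what I saw when I declared them globally. */
static struct NodeTYPE g_nodes[200000];
static struct ElemTYPE g_elements[300000];

/* Option 2: allocate on the heap inside the function instead. */
void some_mesh_function(void)
{
    struct NodeTYPE *nodes    = malloc(200000 * sizeof *nodes);
    struct ElemTYPE *elements = malloc(300000 * sizeof *elements);

    if (nodes == NULL || elements == NULL) {
        free(nodes);
        free(elements);
        return;   /* allocation failed */
    }

    /* ... use nodes[i].x, elements[j].n1, etc. exactly as before ... */

    free(elements);
    free(nodes);
}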

Is a stacksize limit of 8 MB ridiculously small (my system has over 128 GB of RAM)?

ron
  • You could allocate large arrays like those on the heap using `malloc`. This might help you: [What and where are the stack and heap?](http://stackoverflow.com/questions/79923/what-and-where-are-the-stack-and-heap) – 001 Jan 05 '16 at 15:34
  • thanks, the stack vs heap description helped. – ron Jan 05 '16 at 18:01
  • did some math: if my limits.conf file has 8192 KB as the stack limit, having "double x,y,z" in the NodeTYPE structure is 24 bytes for each node. If it's struct NodeTYPE node[100000], then that's 2,400,000 / 1024 / 1024 = 2.29 MB just for that. So depending on how big I declare my node and element arrays in each function, that alone can easily exceed the 8 MB stack limit. So my big question is: what is the reason behind the default stack limit of just 8 MB? What rationale can be given for not increasing it from the default? – ron Jan 05 '16 at 18:07
  • Because I don't wanna rewrite the program to use malloc. :) – ron Jan 05 '16 at 18:09
