
There are, understandably, many related questions on stack allocation:

What and where are the stack and heap?

Why is there a limit on the stack size?

Size of stack and heap memory

However, on various *nix machines I can issue the bash command

ulimit -s unlimited

or the csh command

set stacksize unlimited

How does this change how programs are executed? Are there any impacts on program or system performance (e.g., why wouldn't this be the default)?

In case more system details are relevant, I'm mostly concerned with programs compiled with GCC on Linux running on x86_64 hardware.
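
For reference, a typical bash session looks like this (8192, i.e. 8 MB, is a common default, but the exact number varies by distribution):

$ ulimit -s        # soft stack limit, in kilobytes
8192
$ ulimit -s unlimited
$ ulimit -s
unlimited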

Brian Hawkins

3 Answers


When you call a function, a new stack frame is allocated on the stack; that is how functions get their own local variables. As functions call functions, which in turn call functions, we keep allocating more and more space on the stack to maintain this deep hierarchy of nested calls.
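
For instance, here is a minimal sketch (assuming x86-64 Linux, where the stack grows downward; the addresses will differ on every run):

#include <stdio.h>

/* Each call to nest() gets its own stack frame, so `local` has a fresh
   address at every depth; the addresses typically decrease as we recurse. */
void nest(int depth) {
    int local = depth;
    printf("depth %d: local at %p\n", depth, (void *)&local);
    if (depth < 3)
        nest(depth + 1);
}

int main(void) {
    nest(0);
    return 0;
}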

To curb programs using massive amounts of stack space, a limit is usually put in place via ulimit -s. If we remove that limit via ulimit -s unlimited, our programs can keep gobbling up RAM for their ever-growing stack until eventually the system runs out of memory entirely.

int eat_stack_space(void) { return eat_stack_space(); }
// Compiled without optimization (gcc -O0) and run, this recurses until it
// exhausts the stack -- or, with no stack limit, until memory runs out.

Usually, using a ton of stack space is accidental or a symptom of very deep recursion that probably should not be relying so much on the stack. Thus the stack limit.

Impact on performance is minor but does exist. Using the time command, I found that eliminating the stack limit improved run time by a few fractions of a second (at least on 64-bit Ubuntu).
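
The measurement method, roughly (./a.out stands in for whatever program you are timing; the numbers will obviously vary by machine):

$ time ./a.out             # run with the default stack limit
$ ulimit -s unlimited
$ time ./a.out             # same program, no stack limit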

Alex V
  • with `gcc-4.7 -O3` your `smash_the_stack` example, being tail-recursive, is translated to an endless loop without any calls. – Basile Starynkevitch Jan 23 '13 at 06:44
  • @BasileStarynkevitch Compilers are so smart these days. I altered the example to make it harder for gcc to optimize away, sorry about that! – Alex V Jan 23 '13 at 21:03
  • Even the improved example with a `printf` before the recursive call is compiled to a loop with `gcc-4.7 -O3` – Basile Starynkevitch Jan 24 '13 at 05:59
  • @BasileStarynkevitch Damn. I think the point is made pretty clearly. If you want gcc not to optimize away the stack smashing for educational experimentation, just compile it with `gcc -O0`. – Alex V Jan 24 '13 at 06:52
  • No, the good way is to avoid tail recursion. Put some useful code before and after the recursive call. – Basile Starynkevitch Jan 24 '13 at 07:02
  • @BasileStarynkevitch I tested this code on 4.4 (I do not have 4.7 on this system) and it is not compiled to a loop. – Alex V Jan 25 '13 at 00:35
  • @Maxwell Two mistakes: the stack size limit has nothing to do with preventing the "whole system" from crashing. RAM is RAM; it's the kernel's job to decide how to map it for stack or heap. Making a 100 GB stack is no more harmful than allocating 100 GB on the heap: the operations will either fail (sbrk or exec fail), or there'll be an overcommit and processes will be killed when you use the memory until the system can honour its memory commitments again. In either case, the integrity of the whole system is safe. There's nothing a process can do to defeat the kernel. – Nicholas Wilson Jul 01 '13 at 14:13
  • @Maxwell Secondly, exceeding the stack size is a completely different problem from stack smashing. When a stack overflow happens, the kernel kills the process and that's that. Nothing is written to the stack that shouldn't be, and no harm results (apart from process termination). – Nicholas Wilson Jul 01 '13 at 14:15
  • This answer is plain wrong. Each process on Linux has its own stack; there is no such thing as system stack space. The stack grows downwards on x86, and an overflow occurs when the top of the stack (physically the bottom of the stack memory) hits the pre-set limit or meets another memory mapping. Stack buffer overflows occur in the opposite direction: it would be hard to overwrite the return address if the stack grew upwards. – Hristo Iliev May 11 '16 at 07:17

ulimit -s unlimited lets the stack grow without limit.

This may prevent your program from crashing if it uses deep recursion, especially if it is not tail recursive (compilers can "optimize" tail calls away) and the depth of recursion is large.
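
To illustrate (a sketch; exact behaviour depends on the compiler and optimization level):

/* Tail recursive: nothing is left to do after the recursive call, so
   gcc at -O2 can typically rewrite this as a loop using constant stack. */
long count_down(long n) { return n == 0 ? 0 : count_down(n - 1); }

/* Not tail recursive: the addition happens after both calls return, so
   every level of recursion keeps its stack frame alive. */
long fib(long n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }

With the default 8 MB stack, a deep enough chain of such live frames overflows long before RAM is exhausted; `ulimit -s unlimited` removes that ceiling.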

Abhishek Anand
  • Why not make "unlimited" the default stack size? My use case doesn't involve recursion, but rather old Fortran programs with big static arrays that exceed the default on most systems. – Brian Hawkins Feb 15 '17 at 22:28
  • because a completely buggy program could crash your system. This option must be used only if you trust the program not to eat all available memory – Jean-François Fabre Jun 19 '20 at 07:29
  • A buggy program can also keep allocating from the heap (malloc) and crash/freeze the system. Systems don't typically put heap limits. – Abhishek Anand Jun 24 '20 at 23:49

Stack size can indeed be unlimited. _STK_LIM is the default, and _STK_LIM_MAX differs per architecture, as can be seen in the kernel's include/asm-generic/resource.h:

/*
 * RLIMIT_STACK default maximum - some architectures override it:
 */
#ifndef _STK_LIM_MAX
# define _STK_LIM_MAX           RLIM_INFINITY
#endif

As can be seen from this example, the generic value is infinite, where RLIM_INFINITY is, again, in the generic case defined as:

/*
 * SuS says limits have to be unsigned.
 * Which makes a ton more sense anyway.
 *
 * Some architectures override this (for compatibility reasons):
 */
#ifndef RLIM_INFINITY
# define RLIM_INFINITY          (~0UL)
#endif

So I guess the real answer is: the stack size CAN be limited by some architectures, in which case an unlimited stack size means whatever _STK_LIM_MAX is defined to be; and where that is RLIM_INFINITY, it is indeed infinite. For details on what it means to set it to infinite and what implications it might have, refer to the other answer; it's way better than mine.
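
If you want to see from userspace what limit the kernel actually applied, you can query RLIMIT_STACK directly. A minimal sketch (values are in bytes; RLIM_INFINITY shows up as a very large number):

#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;
    if (getrlimit(RLIMIT_STACK, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    /* rlim_cur is the soft limit (what `ulimit -s` shows, but in bytes);
       rlim_max is the hard ceiling an unprivileged process cannot exceed. */
    printf("soft: %llu\n", (unsigned long long)rl.rlim_cur);
    printf("hard: %llu\n", (unsigned long long)rl.rlim_max);
    return 0;
}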

favoretti
  • That doesn't seem to be true. On my system the value in linux/resource.h is 10*1024*1024. This is also the value printed by "ulimit -s". I compiled a test program with a 20 MB static array, and it segfaults. However, if I issue the command "ulimit -s unlimited" it does not crash. – Brian Hawkins Jan 23 '13 at 17:30
  • Hmm, that's interesting. I was under the impression that it wouldn't go over that limit. – favoretti Jan 23 '13 at 18:17
  • @BrianHawkins: mea culpa, did some more research, adjusted the answer. If this info is irrelevant I can remove the whole answer altogether. – favoretti Jan 23 '13 at 18:35