14

As the title states: is there any general "rule of thumb" for the size of the stack? I'm guessing the size will vary depending on the OS, the architecture, the size of the cache(s), how much RAM is available, etc.

However, can anything be said in general, or is there any way to find out how much of the stack this program is allowed to use? As a bonus question: is there any way (with compiler flags, etc.; I'm thinking mostly of C/C++ here, but also more generally) for the user to set the stack to a fixed size?

Btw, I'm asking strictly out of curiosity, I'm not having a stack overflow. :)

RobD
Andersnk
  • The amount of stack a given program will use is, in general, undecidable (it's equivalent to the [Halting problem](http://en.wikipedia.org/wiki/Halting_problem)). Are you asking how you can explicitly force a limited stack size? – Oliver Charlesworth Mar 11 '13 at 00:06
  • This question may be of interest to you http://stackoverflow.com/questions/156510/increase-stack-size-on-windows-gcc – Ganesh Mar 11 '13 at 00:07
  • First of all thanks! But why is this undecidable and equivalent to the halting problem? – Andersnk Mar 11 '13 at 00:22
  • @Anders: The stack size available to your program is very well defined. The amount of stack an arbitrary program will require for correct operation is similar to the Halting problem for that program (and is only undecidable in general -- most specific programs permit analysis). – Ben Voigt Mar 11 '13 at 00:24
  • @AndersNannerupKristensen: Because in order to determine maximum stack usage, you essentially need to analyse all possible code paths (which I'm sure you can see is very similar to the problem imposed by the Halting Problem). In some (perhaps many) cases, though, this can be figured out via static analysis. But recursion or function pointers make this tricky. – Oliver Charlesworth Mar 11 '13 at 00:28
  • "is only undecidable in general -- most specific programs permit analysis" -- correct. I just heard Vint Cerf make this common mistake, claiming one program can't figure out what another program does because of the HP. But the HP only says that there is no program that can determine whether *any possible* program halts. This has virtually no practical consequences since, for instance, it's possible to determine whether any program that uses a bounded amount of storage halts. "But recursion or function pointers make this tricky." -- Irrelevant; HP is about formal impossibility, not difficulty. – Jim Balter Mar 11 '13 at 01:00
  • "Are you asking how you can explicitly force a limited stack size?" -- That's what he explicitly asked. Obviously, compiler flags are irrelevant to how much stack an algorithm requires. – Jim Balter Mar 11 '13 at 01:06
  • @JimBalter: I was using those as examples of when a codebase might transition from "obviously statically analysable" to "oh, yeah that would be tricky to analyse" (as a hand-waving precursor to "impossible"). – Oliver Charlesworth Mar 11 '13 at 01:29
  • @JimBalter: He also asked "how big is the stack memory for a certain program". But I agree, on a second read, it's obvious what was being asked here! – Oliver Charlesworth Mar 11 '13 at 01:30
  • @JimBalter: However, this certainly isn't my area, so if you say that it's possible to determine whether a bounded-storage program halts, I'll choose to believe you ;) – Oliver Charlesworth Mar 11 '13 at 01:37
  • @Oli Bounded memory implies a bounded number of states. For a given program, each of those states can be mapped to the next successive state. Then, for any starting state, you (that is, a Turing Machine, which isn't limited by a lifespan or even the duration of the universe) can determine whether it reaches a halt state or it's part of a loop. – Jim Balter Mar 11 '13 at 01:52
  • @JimBalter: I see, thanks. So the method is to essentially simulate the program until either you hit a global state you already encountered, or you hit a halt state, both of which would occur in finite time. In that light, the HP is indeed totally irrelevant here! – Oliver Charlesworth Mar 11 '13 at 01:59
  • @Oli Right. The HP is often misapplied to *impractical* analysis when it actually only pertains to a generalization across all programs for which there is a *formal* mathematical proof of impossibility via reductio ad absurdum involving self reference a la Godel's theorem (which can in fact be proven via the HP proof). – Jim Balter Mar 11 '13 at 02:06

2 Answers

6

In Windows, the default stack size for a thread is one million bytes, regardless of Windows version, architecture, amount of RAM, etc.

In managed code (C#, VB, etc.) you can force a new thread to have a different stack size with this constructor:

http://msdn.microsoft.com/en-us/library/5cykbwz4.aspx

To change the stack size of the default thread of a Windows program, whether it is managed or not, you can use the editbin utility:

http://msdn.microsoft.com/en-us/library/xd3shwhf.aspx
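For instance (a sketch; `myapp.exe` is a placeholder and the size, in bytes, is arbitrary):

```
editbin /STACK:2097152 myapp.exe
```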

Eric Lippert
  • Actually, the default size (if you pass 0 to `CreateThread`) is the same as the stack size of the startup thread (as specified in the PE header). – Ben Voigt Mar 11 '13 at 00:18
  • @BenVoigt: I did not know that, but that makes perfect sense. Thanks! – Eric Lippert Mar 11 '13 at 04:46
  • Just additional information: The API for creation of native threads (which allows specifying the stack size) is [`CreateThread`](http://msdn.microsoft.com/en-us/library/windows/desktop/ms682453.aspx) – Ben Voigt Mar 11 '13 at 05:04
5

Yes, you can set the stack size. It is usually a linker flag, and it depends on your toolchain (typically referred to by the name of the compiler).

You will also find several existing questions about this here on Stack Overflow.
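As a sketch, with a MinGW (Windows) GCC toolchain (the comments below note that `--stack` is a Windows-specific linker option) or with MSVC, reserving 16 MiB might look like:

```
gcc main.c -o myapp.exe -Wl,--stack,16777216

cl main.c /link /STACK:16777216
```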

Ben Voigt
  • `--stack` in GCC/GNU-ld is Windows-specific. It's not used on other targets. – R.. GitHub STOP HELPING ICE Mar 11 '13 at 01:12
  • @R..: Is the option spelled differently (`-stack_size`) or you think there is no option? – Ben Voigt Mar 11 '13 at 01:16
  • There is no option. On unix systems, maximum stack size for the initial thread is determined by the `ulimit` command/`setrlimit` function, but on Linux, the reserved/committed stack size is fixed at approximately 128k plus a little extra that seems to depend on the environment (132k or 136k is typical total). If the program attempts to grow the stack beyond that, and there's memory free, it can grow up to the limit set by `ulimit`/`setrlimit`, but there's no way to reserve more than ~128k at the moment `exec` takes place. – R.. GitHub STOP HELPING ICE Mar 11 '13 at 01:20
  • @R..: Well, I know that most (maybe all) of the embedded targets I use have such a linker option. So it seems that Linux is the odd man out. And I don't think that "memory free" is relevant, unless you have disabled swap and overcommit, the only thing that matters is whether there's enough contiguous virtual address space. – Ben Voigt Mar 11 '13 at 01:24
  • I normally assume overcommit is disabled. Leaving overcommit enabled results in a horribly buggy, non-conforming, dangerously-unstable system. :-) Anyway, even if overcommit is enabled, the attempt to enlarge the stack can be what triggers the OOM killer to kill your process. – R.. GitHub STOP HELPING ICE Mar 11 '13 at 01:26
  • @R..: Still, modern systems are much more likely to have problems with virtual address space fragmentation than run out of swap. And you should take your dislike of overcommit up with the kernel developers. – Ben Voigt Mar 11 '13 at 01:26
  • The stack's growth normally won't be inhibited by address space fragmentation; the kernel reserves a virtual address range equal to the `RLIMIT_STACK` size, up to a reasonable limit of at least 8 MB or so, for future stack growth. It just doesn't reserve commit charge. – R.. GitHub STOP HELPING ICE Mar 11 '13 at 01:28
  • @R..: Your comments above are confusing reserve and commit, then, because you earlier said only ~132k was reserved. – Ben Voigt Mar 11 '13 at 01:34
  • OK, to clarify: virtual address space corresponding to the value of `RLIMIT_STACK` is reserved for the stack; no other maps will be allocated within that distance of where the stack begins. Only at most ~132k is accounted as commit charge against physical backing (ram/swap) however, and there is no way to increase this that I know. If you need more committed, you should probably stomp all over the stack at the beginning of `main`...yes that's really ugly, I know. On the other hand if you want to commit less, lowering `RLIMIT_STACK` should do it. – R.. GitHub STOP HELPING ICE Mar 11 '13 at 01:53