I know that local arrays are created on the stack and have automatic storage duration: they are destroyed when the function they are declared in returns. Their size must be fixed at compile time:

{
    int foo[16];
}

Arrays created with operator new[] have dynamic storage duration and are stored on the heap. Their size can be determined at run time:

{
    const int size = 16;
    int* foo = new int[size];
    // do something with foo
    delete[] foo;
}
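
A safer sketch of the same idea uses `std::unique_ptr`, so the `delete[]` cannot be forgotten or skipped by an exception (assuming C++14 for `std::make_unique`):

#include <memory>

{
    const int size = 16;
    auto foo = std::make_unique<int[]>(size); // value-initialized int[size] on the heap
    // do something with foo[0] .. foo[size - 1]
}   // delete[] runs automatically when foo goes out of scope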

The size of the stack is fixed and limited for every process (typically on the order of 1–8 MB by default).
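
On POSIX systems, for instance, the current limit can be queried with getrlimit; a minimal sketch (defaults are platform-specific):

#include <sys/resource.h>
#include <cstdio>

int main() {
    rlimit rl;
    if (getrlimit(RLIMIT_STACK, &rl) == 0)
        // rl.rlim_cur may be RLIM_INFINITY on some configurations
        std::printf("stack soft limit: %llu bytes\n",
                    static_cast<unsigned long long>(rl.rlim_cur));
}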

My question is: Is there a rule of thumb for when to switch from stack memory to heap memory in order to reduce stack memory consumption?

Example:

  • double a[2] is perfectly reasonable;
  • double a[1000000000] will most likely result in a stack overflow if the stack size is 1 MB (a heap-based alternative is sketched below).
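
The second case can be moved to the heap; a minimal sketch with std::vector (it still requests ~8 GB of memory, but a failure is a catchable std::bad_alloc instead of a stack overflow):

#include <vector>

{
    std::vector<double> a(1000000000); // ~8 GB allocated on the heap, zero-initialized
    // do something with a
}   // storage is released automatically here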

Where is a reasonable limit to switch to dynamic allocation?

schorsch312
  • Very close to [this question](https://stackoverflow.com/q/48952362/841108) and my answer there – Basile Starynkevitch Feb 28 '18 at 12:40
  • Rule of thumb: if it fits on the stack, allocate it on the stack (local variable); if it's too big to fit on the stack, allocate it dynamically. – Jabberwocky Feb 28 '18 at 12:41
  • First of all I would rather recommend [`std::array`](http://en.cppreference.com/w/cpp/container/array) instead of plain arrays, or [`std::vector`](http://en.cppreference.com/w/cpp/container/vector) instead of your own dynamic allocation. Then, as for the "break point" where one should start using `std::vector` instead of `std::array`, that really depends on the use case. The "rule of thumb" is that the element count is "small enough", but how small "small enough" is cannot be generally advised. – Some programmer dude Feb 28 '18 at 12:43
  • For a stack-allocated array, you need to know its size at _compile time_. In practice, the size of data structures is usually not known until _run time_. (_Small buffer optimization_ may be of interest if you want to dig deeper into the problem.) – Daniel Langr Feb 28 '18 at 12:48

1 Answer

See this answer for a discussion about heap allocation.

Where is a reasonable limit to switch to dynamic allocation?

In several cases, including:

  • Too-large automatic variables. As a rule of thumb, I recommend avoiding call frames of more than a few kilobytes (and a call stack of more than a megabyte). That limit might be raised if you are sure that your function is never used recursively. On many small embedded systems the stack is much more limited (e.g. to a few kilobytes), so you need to limit each call frame even more (e.g. to only a hundred bytes). BTW, on some systems you can raise the call stack limit considerably (perhaps to several gigabytes), but that is also a sysadmin issue. (A before/after sketch follows this list.)

  • A non-LIFO allocation discipline, which happens quite often (e.g. when the data must outlive the function that created it).
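
A minimal before/after sketch of the first point (the 8 MB local array is a hypothetical example):

#include <vector>

void risky() {
    double buf[1 << 20];              // 8 MB in this call frame: likely stack overflow
    buf[0] = 1.0;                     // use it somehow
}

void safer() {
    std::vector<double> buf(1 << 20); // the 8 MB live on the heap; the frame stays small
    buf[0] = 1.0;
}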

Notice that most C++ standard containers allocate their data on the heap, even if the container itself is on the stack. For example, an automatic variable of vector type, e.g. a local `std::vector<double> autovec;`, has its data heap-allocated (and released when the vector is destroyed). Read more about RAII.
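
A minimal sketch of that behaviour:

#include <vector>

void f() {
    std::vector<double> autovec(1000); // the vector object lives on the stack...
    // ...but its 1000 doubles live on the heap
}   // the destructor runs here and frees the heap storage (RAII)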

Basile Starynkevitch