
Let me take an example to explain my problem,

Case I

#include<iostream>
using namespace std;
int main(){
   int n;
   cin>>n;
   int a[n];
   for(int i=0;i<n;i++)
       cin>>a[i];
}

Case II

#include<iostream>
using namespace std;
int main(){
   int n;
   cin>>n;
   int *a = new int[n];
   for(int i=0;i<n;i++)
       cin>>a[i];
}

Correct me if I am wrong, but to my understanding Case I falls under the static memory allocation domain and Case II falls under the dynamic memory allocation domain. If I am able to achieve the same functionality with static memory allocation, why use dynamic allocation?

In both of the above cases I am able to achieve the same functionality, so why is Case I considered bad and Case II the correct way?

The only difference between the two programs is line 6: `int a[n];` versus `int *a = new int[n];`.

  • No, case 1 is stack memory allocation, and furthermore [is a non-standard `g++` extension](https://stackoverflow.com/q/3324312/1270789), so it is better not to use it, IMO. – Ken Y-N Oct 19 '20 at 11:01
  • Case I isn't considered "bad". It just won't work at all. The compiler won't understand you. The language could've been designed to do a dynamic allocation for you, but they instead chose to force you to make dynamic allocations more explicit. – Elliott Oct 19 '20 at 11:02
  • Try to compile both, input `100000000` and see which one crashes. – Yksisarvinen Oct 19 '20 at 11:04
  • @Yksisarvinen I tried both with an input size of 100000000; **case II** worked well but **case I** crashed. I am sure this is because in **case I** we are taking memory from the stack, which is a scarce resource, whereas **case II** uses heap memory, which is available in abundance. – Kunal Saraf Oct 19 '20 at 11:12
  • That's correct. There are two things to consider: portability and available memory. For portability concerns, see the answer below. Case I is non-standard and only certain compilers accept it. If you don't care about that, because you are sure you will only ever use one compiler for your code, your next concern is the available memory in both areas. Is the stack going to be enough for your use? Based on these, you can select which solution will suit you. – Yksisarvinen Oct 19 '20 at 11:16
  • This was the answer I was looking for, crisp and to the point. Thank you @Yksisarvinen – Kunal Saraf Oct 19 '20 at 11:20

3 Answers


Case I falls under static memory allocation domain and Case II falls under dynamic memory allocation domain.

This assumption is wrong. The non-standard feature you are using in a snippet like this:

int n;

// determine n at runtime ...

int a[n];

is called a VLA (variable-length array) (see this thread for more details), and it is a contentious way of hiding memory allocation (typically on the stack, see @André's comment) and its eventual clean-up behind a convenient syntax.

Note that without the non-standard VLA extension, you will not be able to use arrays from stack space when the maximum array dimension is not known at compile time. Working example:

#include <array>
#include <cstddef>

constexpr std::size_t N = 42; // known at compile time

int main() {
    std::array<int, N> data; // allocated on the stack (automatic storage)
}
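
If the size is only known at run time, the usual standard-C++ alternative is `std::vector`, which performs the dynamic allocation and clean-up for you. A minimal sketch, reading the same input as in the question:

#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::size_t n;
    std::cin >> n;

    std::vector<int> a(n);              // elements live on the heap, owned by the vector
    for (std::size_t i = 0; i < n; ++i)
        std::cin >> a[i];
}                                       // memory is released automatically when `a` goes out of scope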
lubgr
  • You should probably say "when the *maximum* array dimension is not known". It's very common to statically allocate the max and use only what's needed. – John Zwinck Oct 19 '20 at 11:03
  • Slight nitpick: A VLA does not "hide dynamic memory allocation". With a VLA, it is typically somewhere on the stack and resembles more a variable-sized stack segment. See for example: https://stackoverflow.com/q/31199566/417197 – André Oct 19 '20 at 11:04

Case I does not do "static" memory allocation; rather, it is memory allocation "on the stack". It is a variable-length array.

There are multiple reasons:

  • Variable-length arrays are a compiler extension. They are not part of standard C++.

  • There is no error handling with variable-length arrays. It is impossible to forward any meaningful error message to the user, and such programs are very hard to debug. Typically the process will just show an unfriendly "segmentation fault" error message (contrast this with the sketch after this list).

  • The maximum memory you can allocate this way is very, very small and will depend on other parts of the code (making debugging really hard). Linux mostly has the stack limit set to 8 MiB. Allocating more will not raise an error; rather, the process will receive a segmentation-fault signal when it writes to a memory location past that threshold. You can always set a bigger stack limit for your process.

  • The memory has to be freed at the end of the block. It is not possible to return such memory from a function and use it outside of its scope, which makes it useless for most applications where dynamic memory is used (see the sketch after this list).
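
To illustrate the error-handling and lifetime points above, here is a minimal sketch (the function name `read_values` is made up for this example): a dynamic allocation can be handed back to the caller and its failure can be reported, neither of which a variable-length array allows:

#include <cstddef>
#include <iostream>
#include <new>

// The returned buffer outlives the function; a VLA could not be returned like this.
int *read_values(std::size_t n) {
    int *buf = new int[n];              // may throw std::bad_alloc on failure
    for (std::size_t i = 0; i < n; ++i)
        std::cin >> buf[i];
    return buf;
}

int main() {
    std::size_t n;
    std::cin >> n;
    try {
        int *a = read_values(n);
        // ... use a ...
        delete[] a;                     // must be freed explicitly
    } catch (const std::bad_alloc &) {
        std::cerr << "not enough memory for " << n << " ints\n"; // a meaningful error message
    }
}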

KamilCuk
  • Re “Operating system has harder time managing the allocated memory”: What? The operating system does not care. Pages in the stack are virtual memory just like other pages. They can be individually swapped to disk or not. – Eric Postpischil Oct 19 '20 at 11:24
  • `Pages in the stack are virtual memory just like other pages` I did not know that. – KamilCuk Oct 19 '20 at 11:25
  • Okay, then here’s a fun fact. Memory managed can be used to guard against some address/pointer mistakes in the stack. The stack might be 8 MiB, but the stack pointer might be only 1 MiB into it so far, and the system might know 8 MiB of virtual address space is allocated but have mapped only the 1 MiB used portion so far. When the process tries a memory access beyond the 1 MiB, it causes a hardware trap and the operating system can look at it to decide what to do. If it is a read access, the operating system can say that is a mistake, that memory has not been initialized,… – Eric Postpischil Oct 19 '20 at 11:29
  • … and it can refuse to map the memory and deliver a signal to the process. If it is a write access, the operating system can look at where it is. If it is just a little beyond the 1 MiB, the system can say, okay, you are growing the stack, I will map more memory and let the process continue. If it is a lot beyond the 1 MiB, the system can say, whoa, that’s a strange jump, you must have made a mistake, I will not map the memory but will send the process a signal. – Eric Postpischil Oct 19 '20 at 11:30
  • VAX/VMS used to have the latter feature: If you tried to jump too far while using the stack, instead of growing stack frames in “normal” amounts, the process would crash. This became a problem in supporting variable-length arrays where somebody tried to do a large array on the stack and start writing to some part of it. The compiler(s) had to be modified so that, when a large variable-length array was created, the compiler generated a token write access to one element in each page, to cause the stack to grow at a pace the operating system would accept. – Eric Postpischil Oct 19 '20 at 11:33
  • Typo: “Memory managed” three comments above should be “Memory management.” – Eric Postpischil Oct 19 '20 at 11:48
  • Upon reflection, I think it was DEC’s Ultrix, not VAX/VMS. – Eric Postpischil Oct 19 '20 at 21:07

As @lubgr explained, it is not possible to allocate static memory (on the stack) whose size is not determined at compile time. So if you want to determine the size at runtime, you should use dynamic memory allocation (the heap).

Furthermore, as @Jeff Hill explained in the Heap vs Stack post, the heap size is dynamic at runtime, while the stack size is static (so even if it were possible to allocate runtime-sized memory on the stack, your application could sometimes face a stack overflow).

Another difference is speed: the stack is faster than the heap (because of their access patterns).
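
A small sketch of that flexibility: a heap-backed container such as `std::vector` can keep growing while input arrives, which a fixed stack allocation cannot do:

#include <iostream>
#include <vector>

int main() {
    std::vector<int> values;            // heap storage, grows on demand
    int x;
    while (std::cin >> x)
        values.push_back(x);            // capacity is extended at runtime as needed

    std::cout << "read " << values.size() << " values\n";
}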

SpongeBob
  • It’s automatic memory, not static, and “not possible” should be “not supported by the C++ standard.” It is possible by way of compiler extension, when using compilers that support it. Also, “heap” is a misnomer; the memory structures used to manage dynamic memory are not necessarily heaps. – Eric Postpischil Oct 19 '20 at 11:36