I'm taking a C++ course, and today I was discussing dynamic memory allocation with someone else in the class. He recounted that in his previous C++ course (which was his first exposure to programming), they would build and use arrays with something like this:
#include <iostream>

int main(void) {
    int x;
    std::cin >> x;

    int arr[x];  // array size comes from user input, not a compile-time constant

    for (int i = 0; i < x; i++) {
        int k;
        std::cin >> k;
        arr[i] = k;
    }

    for (int i = 0; i < x; i++) {
        std::cout << "The value at arr[" << i << "] is " << arr[i] << std::endl;
    }

    return 0;
}
I compiled this with g++ and got no compilation errors, and when I ran it (with small values of x) the output was exactly what I expected. I then translated the code to C, compiled it with gcc, and got the same result.
Why does this work? We don't know the array size at compile time, yet we aren't dynamically allocating any memory, and we can still read and write the array without error. My assumption is that the OS reserves a certain amount of memory for the stack up front and we are just relying on the fact that some of that memory hasn't been used yet, but I'm not totally satisfied with that explanation.

If this is safe, why do we use dynamic allocation for arrays whose size isn't known until runtime? If it isn't safe, why doesn't the compiler give me a warning (or is there a specific flag I need to add to see one)? Any insight into what is actually going on here would be greatly appreciated.
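For comparison, here is a minimal sketch of what I mean by "dynamic allocation" for an array whose size isn't known until runtime, using new[]/delete[] (I realise std::vector would be the more idiomatic C++ choice, but new[] makes the contrast with the stack-based version above clearer):

#include <iostream>

int main(void) {
    int x;
    std::cin >> x;

    // Explicitly request storage for x ints from the heap at runtime.
    int* arr = new int[x];

    for (int i = 0; i < x; i++) {
        std::cin >> arr[i];
    }
    for (int i = 0; i < x; i++) {
        std::cout << "The value at arr[" << i << "] is " << arr[i] << std::endl;
    }

    delete[] arr;  // release the heap memory when done
    return 0;
}

This is the kind of code I expected we would need, which is why I'm puzzled that the stack-based version works at all.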