
Hello, I have seen this question about the maximum size of a local array variable, but I want to know why the array is OK if it is declared at global scope and not OK if it is declared inside main.

And a related question: is it good practice to have big memory objects defined in a.cpp and declared in a.hpp with extern? Or is it better to work with big memory defined locally in a function (as a vector, or via new or malloc) and pass it around as function arguments?

In my experience, these are the questions I have to resolve...

Thank You

#include <iostream>
using namespace std;

#define N (10000000000000)
int sd[N];

int main() {
  // int sd[N];
  return 0;
}
  • When you have extremely big or unknown size arrays, you want to utilize the heap. – amanuel2 Oct 22 '16 at 20:21
    Use a `std::vector` instead. – πάντα ῥεῖ Oct 22 '16 at 20:23
  • Your local variables for the routine are allocated on the stack, which is expected to be relatively small. Global variables are allocated in the program's static data area (oversimplifying, a much larger space than the stack by default). Your variables can also be allocated from heap space within a local routine through the use of new or malloc(), but the pointer used to reference those variables goes out of scope when the function exits, causing a memory leak unless you de-allocate the memory first using delete or free(). – mikeTronix Oct 22 '16 at 20:30
  • So if I prefer using `int *t = new int[10000000000000]` in main(), as I understand it, t might point to another memory region when a function exits (a function like `f(int *t, size_t s)`). To be sure there is no possible problem, would using `int * const t = new int[10000000000000]` with the function `f(int * const t, size_t s)` be 100% robust? – user7058377 Oct 23 '16 at 10:45
  • Correction: so if I prefer using a **static** `int *t = new int[10000000000000]` ... (with static it's OK) – user7058377 Oct 23 '16 at 10:52

1 Answer


Declared at global scope:

int sd[N];

int main() {
  return 0;
}

The array lives in the program's static data area (for a zero-initialized array like this, the .bss segment, so the binary file itself stays small). When the process is loaded into memory, the entire set of global data is mapped in.

Declared within a function:

int main() {
  int sd[N];
  return 0;
}

And the memory is allocated as soon as the function is invoked - and it's allocated from the stack. The stack for a thread is usually small, on the order of a megabyte or a few megabytes by default. Once the stack memory runs out, it's game over (a stack overflow).

As others have pointed out in the comments, the right way to allocate a LARGE array is dynamically, using heap memory, which is typically plentiful.

int main() {
  int* sd = new int[N];

  ...

  delete [] sd; // free the allocated memory
  return 0;
}

Even better, so you don't have to remember the delete:

int main() {
  std::vector<int> sd(N);

  ...

}
selbie
  • http://stackoverflow.com/questions/40202324/c-vector-appear-to-be-faster-than-c-array-time-why – user7058377 Oct 23 '16 at 11:25
  • Why are vectors faster than C arrays? – user7058377 Oct 23 '16 at 11:26
  • @user7058377: A container is not slow or fast. Individual operations are slow or fast. And you only care about speed when and **if** you have actually *experienced* that something is too slow. Until then, you use the correct tool. `std::vector` is most certainly the correct tool here. – Christian Hackl Oct 23 '16 at 11:57