0

I made a struct which needs 1 GB of memory. When I allocated it on the heap, the program started fast and I saw in the Application Manager that its memory usage went up to that amount. But when I allocated it on the stack like a simple variable, the application needed much more time to start, and in the Application Manager I saw it was not using that amount of memory (just a few KBs). Why is that? Does it mean it's recommended to store a big amount of data on the heap? Is it faster in that case? I know allocating memory on the stack is usually faster because of the mapping etc., but in this case it was strange. Can anyone explain this to me? Thanks in advance!

Distraught
  • When you allocate it on the stack, you should get a stack overflow. The stack is usually 1 MB or so... – Jaa-c Sep 11 '18 at 14:52
  • 2
    How exactly did you allocate this "on the stack"? –  Sep 11 '18 at 14:52
  • 1
    That doesn't seem right. What platform are you trying this on? What's the code you've tried? Are you sure your object is actually on the stack? Are you sure it's actually a GB? – François Andrieux Sep 11 '18 at 14:53
  • Jaa-c: The program just started and I got no error. Neil: Because I allocated the object like a simple variable: `X y;`. Francois: I'm using Windows and MinGW. – Distraught Sep 11 '18 at 15:07
  • The behavior you describe does not seem possible. Can you show us some code? – Jim Mischel Sep 11 '18 at 17:15

2 Answers

1

The size of the stack is usually around one to a few megabytes by default on typical desktop systems, and probably less on embedded devices.

If you allocate more memory than fits on the stack, the operating system will typically terminate the program as soon as you attempt to access the memory.
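
For instance, here is a minimal sketch of that failure mode (the 64 MB size is arbitrary, chosen only to exceed a typical stack limit):

int main() {
  char big[64 * 1024 * 1024]; // 64 MB automatic array: larger than a typical 1 MB stack
  big[0] = 1;                 // accessing it typically kills the process (stack
                              // overflow / access violation); no exception is thrown
}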

Does it mean it's recommended to store a big amount of data on the heap?

It is recommended to use the free store (dynamic allocation) for large amounts of data, because a large amount of data would overflow the stack.
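
For example (a sketch; Huge here is just a stand-in for the 1 GB struct from the question):

#include <memory>

struct Huge {
  char data[1u << 30]; // ~1 GB, far too large for automatic storage
};

int main() {
  // Huge h;                         // this would overflow the stack
  std::unique_ptr<Huge> h(new Huge); // this lives on the free store instead
}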

in the Application Manager I saw it was not using that amount of memory (just a few KBs).

Typically, an operating system allocates physical pages of memory for a process only when that memory is first accessed. Since your program didn't crash due to stack overflow, I suspect that you never accessed the memory, and therefore no memory was allocated for the data.
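
A rough way to observe this (assuming an OS that commits pages lazily; what a task manager reports is OS-specific):

#include <cstddef>
#include <cstring>

int main() {
  const std::size_t gb = std::size_t(1) << 30;
  char* p = new char[gb]; // reserves 1 GB of address space; on many OSes no
                          // physical pages are committed yet, so resident
                          // memory ("working set") stays small
  std::memset(p, 0, gb);  // touching every page is what makes resident memory grow
  delete[] p;
}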

eerorika
  • This answer is correct, but doesn't address the odd behavior described in the question in the case of the large stack allocation. – François Andrieux Sep 11 '18 at 14:55
  • But does it mean that we can only allocate a limited size of variables at the same time because of the stack size limit? Okay, I know it can be billions, but there's a limit in that case. – Distraught Sep 12 '18 at 12:48
  • @EricBlack exactly. The amount of memory available to local variables is limited. Much more limited than "billions". As I said, the size is typically one to a few megabytes. Mega is a *million*. The typical size of an `int` is 4 bytes, so given a process whose stack is a megabyte, you could fit 262144 `int` variables on the stack before you run out of memory. The most typical ways to overflow the stack are using big arrays of objects, or recursion whose depth increases linearly (or worse) in relation to some input. – eerorika Sep 12 '18 at 12:51
1

Yes, it's recommended to make large allocations dynamically - because then you can cope gracefully with failure (obligatory note on terminology).

For example, this:

#include <cstddef>
#include <vector>

void might_throw(std::size_t sz) {
  std::vector<int> v(sz); // dynamic allocation: may throw std::bad_alloc
  // ...
}

will throw std::bad_alloc if it fails for sufficiently large sz, meaning I have the option to catch the exception and retry with a smaller number. Even if I can't usefully recover, stack unwinding allows my other objects to be cleaned up safely.
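
One way that recovery might look (a sketch; allocate_with_fallback is a made-up name):

#include <cstddef>
#include <new>
#include <vector>

std::vector<int> allocate_with_fallback(std::size_t sz) {
  while (sz > 0) {
    try {
      return std::vector<int>(sz); // may throw std::bad_alloc
    } catch (const std::bad_alloc&) {
      sz /= 2;                     // retry with a smaller request
    }
  }
  return {};                       // give up: return an empty vector
}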

Conversely

void will_just_die() {
  int a[SomeEnormousConstant]; // automatic ("stack") storage: failure is not reportable
  // ...
}

has no recovery mechanism if a can't really be created. The program will just crash, hard, with no stack unwinding or (standard) error handling mechanism.

This may happen immediately, or it may only happen when you actually try to access more of a than could successfully be allocated. If you're very unlucky it might even appear to work but break something else.


The details of how a given allocation shows up externally are very OS-dependent, and I'm not sure what you're using - is Application Manager the OSX one?

It's common for a large dynamic allocation to be mapped directly, in which case it would show up instantly as an increase in virtual size, but might still not be allocated any physical pages except on access.

If the automatic ("stack") allocation is just performing the frame pointer arithmetic and again relying on lazy allocation of physical pages, this won't affect either the virtual or physical size (again, until you try actually accessing that memory).

I don't know why the automatic version would take longer to start though - you'd have to provide an MCVE for which this was actually reproducible, as well as your OS/platform details to get an answer to that.

Useless
  • *"will throw `std::bad_alloc`"*. Nah. It *may* throw `std::bad_alloc`. On systems that overcommit memory, it might not throw anything despite allocating beyond available physical memory. Rather, memory is only allocated when it is first accessed, and if no memory (even swap) is available, the system terminates the process. I don't think it's possible to attempt recovery from over-allocation on such systems, unfortunately. – eerorika Sep 11 '18 at 15:08
  • It's still more likely to throw (and give you the chance of recovery) than the automatic version - and anyway, the OOM killer might terminate a different process and give you the pages after all. – Useless Sep 11 '18 at 15:14