1

I have been thinking about this question but haven't found any explanation yet.

What decides how much memory should be assigned to, let's say, a C++ program? Is it the OS, acting on a recommendation from the compiler? ..the linker?

And what is the ratio of stack and heap in the allocated memory?

jogojapan
RAB

4 Answers

3

The answer is different for different OSes. Typically the executable contains a desired stack size for the main thread, put there by the linker, which might be overridden by OS settings. OS settings can be configured in one or more ways, possibly per-user. Some OSes don't require a stack size to be specified up front, they can add stack as it's used, more or less indefinitely (until a hard limit is reached or the system runs out of free memory). Those that do require a size up front might initially only allocate address space rather than memory, and map addresses to memory if and when the stack reaches that far.

Heap typically is not allocated up front, so there's no "ratio of stack and heap". Total memory allocated to a process may or may not be restricted -- if not then it can go as high as system resources allow, or on a 32 bit system might be restricted by the available address space.

Steve Jessop
1

It's not the sort of issue that's in the C++ standard. It's compiler and OS dependent.

For an example of the kind of information a linker embeds in an executable, which the OS factors in when determining the resources a program requests, see:

http://en.wikipedia.org/wiki/Executable_and_Linkable_Format#ELF_file_layout

In some circumstances there are APIs to specifically request resources from the OS:

Change stack size for a C++ application in Linux during compilation with GNU compiler

There are also ways to tell the OS to set quotas and limits in some environments:

https://stackoverflow.com/questions/4983120/limit-memory-usage-for-a-single-linux-process

Set Windows process (or user) memory limit

If you want to do an empirical study of how a certain OS manages resource usage, you might get a better sense of it from a process monitor utility than from documentation, especially with a closed-source OS.

Community
1

Depends on your program and the OS. Typically, on start-up only enough memory is allocated to hold the executable, any read-only data, and usually around 4 KB for the stack. Then, when you call malloc or new to allocate memory, you'll get virtual memory space without any physical memory backing it up. This is called lazy allocation, and the memory only gets physically allocated when you actually write to it.

Compile and time the following to get an idea of what I'm talking about:

// justwrites.c
// calloc zero-fills the buffer, i.e. 19531 writes -- the same number of
// per-page writes deadbeef.c performs (20000000 / 1024 ≈ 19531)
#include <stdlib.h>

int main(int argc, char **argv) {
    int *big = calloc(19531, sizeof(int));
    return 0;
}

// deadbeef.c
#include <stdlib.h>

int main(int argc, char **argv) {
    int *big = malloc(sizeof(int) * 20000000); // allocate 80 million bytes

    // immediately write to each page to force physical allocation up front
    // (1024 ints == 4096 bytes, i.e. one 4k page on a 32-bit machine)
    for ( int *end = big + 20000000; big < end; big += 1024 ) *big = 0xDEADBEEF;

    return 0;
}

// bigmalloc.c
#include <stdlib.h>

int main(int argc, char **argv) {
    int *big = malloc(sizeof(int) * 20000000); // allocate 80 million bytes, never touch them
    return 0;
}
Robert S. Barnes
  • Can optimistic memory allocation be called "typical" given that Windows (AFAIK) doesn't do it? – Steve Jessop Apr 24 '12 at 13:33
  • @Steve: MSVC++ doesn't. Windows does support the same underlying method that Linux/libc `malloc` uses (i.e. `CreateFileMapping` without a file handle, and no special options) and other Windows compilers could definitely implement `malloc` that way. – MSalters Apr 24 '12 at 14:35
  • @MSalters: true, but if you don't have an OOM killer then the technique isn't so good. And you probably don't want a compiler sticking a user-mode version of an OOM killer into every app. Do any Windows compilers (including mingw) actually do optimistic allocation, and if so then how successfully? – Steve Jessop Apr 24 '12 at 14:59
  • @SteveJessop: The Windows behavior would be a bit more deterministic: if the promised storage is actually claimed and the swap file cannot be grown, the Structured Exception Handler responsible for paging in would fail and thus the application trying to use the memory would quit with an unhandled exception. No idea on actual compiler implementations, sorry. – MSalters Apr 24 '12 at 22:13
0

At least for 32-bit Windows, each process gets its own copy of the address space: 2 GB for user mode and 2 GB for the kernel (the kernel half is shared by all processes). The virtual memory subsystem ensures that processes that access the same location get the appropriate data for their own process. This is how a program can have the same entry point and be running multiple times while not stepping on data in use by other processes running the same executable.

Applications will continue to use up more of the virtual memory, and the kernel will allocate more physical memory to that process, until something runs out: physical memory, or swap space/paging file. You can limit the memory that a process may use through system calls.

Stack and heap are almost always allocated on opposite ends of available memory, so the stack grows down from the top of available memory while the heap grows up from the bottom (this decision depends on the architecture). This allows them to grow separately so that a program that needs a lot of heap but not much stack can use the same plan as one that needs lots of stack and not much heap.

Paul Rubel