
I've learned that a process has the following structure in memory:

[Figure: typical process-in-memory layout — stack at the top growing down, heap growing up, data and text below]

(Image from Operating System Concepts, page 82)

However, it is not clear to me what decides that a process looks like this. I guess processes could (and do?) look different on non-standard OSes / architectures.

Is this structure decided by the OS? By the compiler of the program? By the computer architecture? A combination of those?

Martin Thoma
  • I have the same questions for the stack frame - should I ask it in a new question? – Martin Thoma Sep 05 '15 at 12:44
  • Possible duplicate of [Why do stacks typically grow downwards?](https://stackoverflow.com/questions/2035568/why-do-stacks-typically-grow-downwards) – recnac Apr 20 '19 at 12:00

3 Answers


Related and possible duplicate: Why do stacks typically grow downwards?

On some ISAs (like x86), a downward-growing stack is baked in. (e.g. call decrements SP/ESP/RSP before pushing a return address, and exceptions / interrupts push a return context onto the stack, so even if you wrote inefficient code that avoided the call instruction, you can't escape hardware use of at least the kernel stack, although a user-space stack can do whatever you want.)

On others (like MIPS where there's no implicit stack usage), it's a software convention.


The rest of the layout follows from that: you want as much room as possible for downward stack growth and/or upward heap growth before they collide. (Or allowing you to set larger limits on their growth.)

Depending on the OS and executable file format, the linker may get to choose the layout, like whether text is above or below BSS and read-write data. The OS's program loader must respect where the linker asks for sections to be loaded (at least relative to each other, for executables that support ASLR of their static code/data/BSS). Normally such executables use PC-relative addressing to access static data, so ASLRing the text relative to the data or bss would require runtime fixups (and isn't done).

Or position-dependent executables have all their segments loaded at fixed (virtual) addresses, with only the stack address randomized.

The "heap" isn't normally a real thing, especially in systems with virtual memory where each process has its own private virtual address space. Normally you have some space reserved for the stack, and everything outside that which isn't already mapped is fair game for malloc (actually its underlying mmap(MAP_ANONYMOUS) system calls) to choose from when allocating new pages. But yes, even modern glibc's malloc on modern Linux does still use brk() to move the "program break" upward for small allocations, increasing the size of "the heap" the way your diagram shows.

Peter Cordes

That figure represents either a specific implementation or an idealized one. A process does not necessarily have that structure; on many systems a process looks only loosely similar to what is in the diagram.

user3344003
  • You did not answer my question. Please either expand this answer (what decides what the process looks like? Your answer suggests it's only the OS) or remove the answer and add it as a comment. – Martin Thoma Sep 06 '15 at 10:50
  • The problem is your question starts from a false premise. A process does not necessarily have the structure you describe. – user3344003 Sep 09 '15 at 17:30

I think this is recommended by some committee, and then tools like GCC conform to that recommendation. The binary format defines these segments, and the operating system and its tools let binaries in that format run on the system. For example, ELF was specified by System V and then adopted across Unix, and GCC produces ELF binaries to run on those systems. So I feel the story may start with the binary format, since it decides the memory mappings (code, data, heap, stack). The binary format, among other things, defines the mappings used when loading programs: ELF defines segments (arranging code and data into text, data, stack to be loaded into memory), GCC generates those segments of the ELF binary, and the loader maps them. The operating system also has freedom in adjusting some of these, like the stack size. These are debatable loud thoughts which I am trying to consolidate.

incompetent