
From what I know, every variable that I create is stored in memory (RAM or the pagefile, it doesn't matter which).

So when I store a variable at a particular virtual address, it will actually be stored somewhere in real memory. From what I understand, the application is not going to literally clean the stack - that is, go to those addresses and set everything to zero; it will just increment/decrement the stack pointer, and the memory used by one function might later be reused by a different function. That's the reason we need to initialize a local variable when we create it.

So the application itself doesn't go to those addresses in RAM and zero them again. My question is: who does, so that the next process will be able to use those exact RAM addresses again?

  • It's totally operating system / implementation specific. And it's totally unrelated to C. – Jabberwocky Oct 13 '17 at 09:57
  • I don't think anyone needs to set it to 0. When you are done with that memory, other programs just overwrite it if required. – qwn Oct 13 '17 at 09:58
  • "Clearing" memory is quite expensive. Doing it for the stack of a process might not be so bad (well, *might* not, depending on stack size... On Linux the default stack size is 8MiB), but how about the heap? What if an application allocates a couple of gigs of heap memory? If that should be "cleared" every time then that will suddenly take up quite a lot of resources as well as quite a lot of time. Even on modern systems. – Some programmer dude Oct 13 '17 at 10:00
  • And as mentioned, why should it be "cleared"? The physical memory used (whether in RAM or on disk) by a process doesn't cease to exist once the process exits, it will still be there for the next process to use. – Some programmer dude Oct 13 '17 at 10:02
  • @qwn But when the MMU maps a virtual address to a real address, it doesn't know/care whether it is a stack address or not, just whether the real address is free. So who makes the real address free again, so that another process will be able to use it? – ליאב לוי Oct 13 '17 at 10:04
  • The assumption that stack space *should* be cleared is not accurate. There is no point to that, it simply gets overwritten when it is reused. Otherwise it's a pretty standard bug in a C program: local variables have a random value when they are not initialized. – Hans Passant Oct 13 '17 at 10:04
  • "Free" and "used" are just high level conventions, they don't correspond to anything real in hardware. A byte is "free" because you treat it as such, not because it has been zeroed. – harold Oct 13 '17 at 10:07
  • You may want to clear memory after usage for security reasons, when you use some sensitive data, like passwords, hashes, tokens, etc. But it's up to the app code to clear RAM in such a case. Otherwise there's no need to clear RAM; the next app will simply overwrite it with its own values. That said, there are languages which guarantee that newly allocated memory is zeroed (like Java), and then it's up to that language's interpreter/compiler to allocate memory from the OS and clear it ahead of its use (or the OS itself may offer a cleared-memory allocation service, at a performance penalty). – Ped7g Oct 13 '17 at 10:08
  • @Someprogrammerdude But in the end, the "guy" who decides where to store all the variables, heap, stack, is the MMU. And when an address can't be reused because it has a value in it, we have a memory leak. For example, we free a variable allocated on the heap because we want the MMU to be able to use those RAM addresses again, don't we? – ליאב לוי Oct 13 '17 at 10:09
  • It is not the job of the MMU, but of the OS which has a defined memory layout depending on hardware and OS. – Tony Tannous Oct 13 '17 at 10:10
  • I think you need to find some books on how computers and memory work, because that's not how it works. You say memory can't be used because "it has a value in it", but zero is *also* a value, a value like any other value. It's just that all bits are zero instead of only some (or none) of the bits. Why should the value zero be different from any other value? It's the operating system that decides which parts of the (virtual) memory are used or not (possibly through page-tables), it doesn't depend on values. – Some programmer dude Oct 13 '17 at 10:11
  • @Ped7g But when a different program wants to create a variable, for example, the MMU won't overwrite an address; it will store it at a free RAM address, which isn't used by another process. So how does the MMU know whether the address is ready to be used by a process or not? – ליאב לוי Oct 13 '17 at 10:13
  • The OS keeps track of which physical memory and virtual space was assigned to the process. Once the process terminates, the OS will modify its internal data about memory usage (both physical and virtual), so it will be able to allocate the same memory to other processes in the future. But that's in no way connected to the content of that memory; whatever is written there may stay there. The OS doesn't care what values are stored in unused memory. – Ped7g Oct 13 '17 at 10:14
  • @Someprogrammerdude That's exactly what I'm asking: how can the OS know which RAM address is ready to be used again or not? I assumed that it does that by checking whether the address's value is "zero", but that's probably not true. So how does it know? – ליאב לוי Oct 13 '17 at 10:15
  • The answer to your question can cover two/three chapters. Take a good Operating Systems book and read it. Pay attention to the memory management and virtual memory chapters. – Tony Tannous Oct 13 '17 at 10:16
  • @Ped7g So if the OS is doing it anyway, why do we need to free a heap address? Why do we need garbage collectors? Why do we have memory leaks? – ליאב לוי Oct 13 '17 at 10:17
  • It has its own memory manager with some internal data structures, probably holding pointers/ranges of used/unused memory, etc... Sources of some OSes like BSD and Linux are public, so you can check for yourself (don't expect anything trivial; it would probably take a few days just to read through it the first time, find the major parts, and understand how it works). – Ped7g Oct 13 '17 at 10:17
  • I don't know about you, but I don't need a garbage collector; actually I hate environments where one is enforced upon my code, as it usually hampers performance and doesn't help with anything. A memory leak on a modern OS means that you are leaking within your process space (eventually exhausting the free memory of the OS, at which point you will affect the performance of others, but nothing more). Any decent modern OS is capable of killing a process and cleaning up its internal OS data about it to reclaim all resources. Although that took some time to get right; for example, obsolete versions of Windows used to run out of file descriptors... – Ped7g Oct 13 '17 at 10:21
  • That said, if your process doesn't terminate (some kind of app like the Steam client, running in the background) and it keeps allocating new memory all the time, it will eventually exhaust the free memory of the OS, affecting the performance of the whole machine and of other processes. So if you are writing an application, you should make sure you use and re-use your memory properly, so its usage doesn't grow infinitely. But the strategy for re-use - whether you `free` it in your heap allocator and then allocate it again, or literally reuse the once-allocated memory - depends on the programmer (and skills). – Ped7g Oct 13 '17 at 10:25
  • Practical example: you have an app which shows a clock on the desktop. Upon start you allocate enough memory for the image to be created; then for the remaining eternity you draw your clock into that allocated space and send it to the desktop API to draw - no more allocation, no memory leaks, your app is fine. Another app changes the desktop background picture. On a 10-minute timer it picks the next picture file, reads and decodes it (allocating new image memory), and sets it as the desktop. Now if this one does not release the old memory from the previous image, it will fill up the whole available RAM after a few days. – Ped7g Oct 13 '17 at 10:29
  • @Ped7g So the point of garbage collectors and freeing a variable is just to not use too much space at run time? Which means that when I allocate memory with malloc and don't free it, it won't affect the next process? – ליאב לוי Oct 13 '17 at 10:29
  • Yes, it will not directly affect other processes. It may do so indirectly by exhausting the available resources of the machine, at which point users usually get angry, check the task manager, and kill the offending process which is exhausting their memory. The same goes for other kinds of resources (that's for example a problem with some Java programmers: the GC gets them used to not caring, and then they leak the non-memory resources too (like files, or whatever else), but those are not collected by the VM, so they will eventually exhaust the OS and anger the users). – Ped7g Oct 13 '17 at 10:31
  • @qwn the OS will actually zero out a page when it gives that page to another process, in order to not leak any information from the old process. That's why [`malloc`+`memset` is slower than `calloc`](https://stackoverflow.com/q/2688466/995714) – phuclv Oct 16 '17 at 02:36
  • @LưuVĩnhPhúc So the CPU fills in random data when you malloc? That does not seem right. – qwn Oct 16 '17 at 09:55
  • @qwn no, it contains [uninitialized data](https://stackoverflow.com/q/17444525/995714), just like a variable on stack after declaration – phuclv Oct 16 '17 at 10:24

4 Answers


Within the same program, typically nobody cleans up. Newly allocated memory on the heap, and new local variables on the stack, can contain old data (if not otherwise initialized). If you are not careful to initialize things, this can cause intermittent bugs or be exploited by hackers to reveal "secret" data.

When you start a new program, it should be the responsibility of the OS to clear memory. That is typically built into the paging system: as you request pages you should get zeroed ones. But the details vary a lot between operating systems.

NickJH
  • I think in the current era, due to security concerns, it may be safer if the OS does trash the memory content upon process termination (with modern computer performance the price of clearing is not that high on common home machines, yet sensitive data leakage can be costly), but generally speaking the OS doesn't need to clear memory; there's no point to it. – Ped7g Oct 13 '17 at 10:11
  • So which part of the OS does it? And if the OS is freeing the memory which was used by a program, why do we need to waste time on a garbage collector, or to free the memory ourselves? – ליאב לוי Oct 13 '17 at 10:22
  • @ליאבלוי We need a garbage collector so we can find out what memory is currently unused and reuse it in the same program. If you don't want to re-use memory in the same program, you don't need a garbage collector or any other kind of sophisticated memory management. – fuz Oct 13 '17 at 10:39
  • I'd point out that, depending on how much memory a program used and what metric defines "too slow", zeroing out large amounts of memory is slow by many metrics. One has to remember that to fully zero out the memory, all of the writes need to both clear the values in the caches and go out to DRAM. There are scenarios where such a performance hit could be undesirable. – vxWizard Oct 13 '17 at 18:47
  • @vxWizard Depending on the CPU architecture, one may be able to create a special page of all zeros and map the same page multiple times into the process. The page would be set as read only. An attempt to write would cause a fault that could then be used to generate a new r/w page that would then be filled with all zeroes, and then the operation would be done again. This means you are only zeroing the pages on demand. If the data that gets stored in that memory region is sparse, it would cut down the time to zero-initialize. – Michael Petch Oct 13 '17 at 21:52

After that lengthy discussion in the comments, I think a summary answer actually makes sense for this question (I also think the question is NOT so bad as to deserve downvotes; after all, you are asking about a particular programming concept, just misunderstanding it a bit - that probably annoys people, but to me it looks like a question about programming).

First thing: the OS keeps track of used/free memory in its internal structures, storing things like pointers/ranges of addresses and working with "pages" of memory rather than single bytes. So the actual content of memory is of no interest to the OS: if the memory at physical address range 0x10000-0x1FFFF is tracked in the OS's internal data as "free", it's free. The content of the bytes doesn't matter. If that memory area is claimed by some process, the OS tracks that in its internal data, so upon termination of that process it will mark that area as "free" again, even if the process didn't explicitly manage to release it before termination.

Actually the OS usually does not clear memory upon an allocation request, for performance reasons (although I *guess* some security-hardened OS may actually clear RAM after every terminated process, just to make sure nothing malicious or sensitive will leak in the future to the next process reusing the same physical memory). If the app was programmed in a language which guarantees that newly allocated memory is cleared, then that's the responsibility of that language's runtime.

For example, C and C++ do not guarantee zeroed memory (again for performance reasons: clearing takes time), but they have heap-memory manager code in the libc runtime, added to every application compiled from C sources using the default libraries and runtime. The heap manager allocates free memory from the OS in bigger chunks, and then micro-manages it for the user code, supporting new/delete/malloc/free; these actually don't go directly to the OS memory manager - that's what the internal C runtime does only when it exhausts its current pool of available memory.

So there's no need to zero values to reclaim the memory for the OS; it has to "zero" only its internal data about which parts of RAM are being used, and by which process.

This OS memory manager code is probably not trivial (I never bothered to check an actual implementation, but if you are really into it, get some book about OS architecture, and you may also study the sources of current operating systems), but I guess in principle, upon booting up, it maps the available physical memory and cuts it into different areas (some are off-limits to user code, some ranges are memory-mapped I/O devices, so they may be off-limits to everyone except the particular driver of that device, and usually the biggest chunk is "free" memory for user applications), and keeps something like a list of available memory "pages", or whatever granularity the OS wants to manage it at.

So who cleans up RAM (and other resources)? The OS, upon terminating a process. A good OS should be designed in such a way that it is capable of detecting all resources held by the terminating process and reclaiming them (without cooperation from the process code itself). With older operating systems it was not uncommon for this part to be a bit flawed, and the OS kept running out of certain types of resources over time, requiring periodic reboots, but any solid OS (like most of the UNIX family) can run for years without intervention or leaking anything.

Why do we have garbage collectors and other means for memory management:

Because as the programmer of an application, you decide how many resources the app will use. If your app runs continuously in the background and allocates new resources all the time without releasing them, it will eventually exhaust the available resources of the OS, affecting the performance of the whole machine.

But usually when you write an application, you don't want to micro-manage memory. If you allocate 100 bytes in one place and another 100 bytes in another place, then no longer need them but do need 200 bytes, you probably don't want to write complex code to reuse the discarded 100+100 bytes from the previous allocations. In most programming languages it is simpler to let the memory manager collect those earlier allocations (for example: in C/C++ by free/delete, unless you use your own memory allocator or a garbage collector; in Java you simply drop all known references to the instances, and the GC will figure out that the memory is no longer needed by the code and reclaim it), and allocate a brand new 200-byte chunk.

So memory managers and GCs are a convenience for the programmer, to make it simpler to write common applications, which need to allocate only a reasonable amount of memory and release it back in a timely manner.

Once you work on some complex software, for example a computer game, you need a lot more skill, planning and care, because then performance matters, and such a naive approach of just allocating small chunks of memory as needed would end badly.

For example, imagine a particle system allocating/freeing memory for each particle, while the game emits thousands of them per minute and they live for just a few seconds. That would lead to a very fragmented memory manager state, which may collapse when the app suddenly asks for a large chunk of memory (or it will ask the OS for another one, slowly growing memory usage over time, and then the game crashes after a few hours of playing because the OS's free memory is exhausted). In such cases the programmer has to dig down into micro-managing memory, for example allocating, just once for the total lifetime of the game process, a big buffer for 10k particles, keeping track itself of which slots are used and which are free, and handling gracefully the situations when the app requests more than 10k particles at the same time.

Another layer of complexity (hidden from the programmer) is the OS being capable of "swapping" memory to disk. The app code is not aware that a particular virtual memory address leads to non-existent physical memory; the access is caught by the OS, which knows that that piece of memory is actually stored on disk. So it finds some other free memory page (or swaps out some other page), reads the content back from disk, remaps the virtual address to the new physical memory address, and returns control to the process code, which tried to access those values (now available). If this sounds like a terribly slow process, it is - that's why everything on the PC "crawls" when you exhaust free memory and the OS starts swapping memory to disk.

That said, if you ever program something which manipulates sensitive values, *you* should clear unused memory after yourself, so it will not leak to some future process which receives the same physical memory from the OS after your process releases it (or terminates). In such cases it's better to "clear" memory by trashing it with random values, as sometimes even the number of zeroes can leak a minor hint to an attacker, like how long the encryption key was, while random content is random = no info, as long as the RNG has enough entropy. It's also good to know the details of the particular language's memory allocator, so in such applications you can, for example, use a special allocator to guarantee that the memory holding sensitive data is not swappable to disk (as otherwise the sensitive data ends up stored on disk in case of swapping). Or, for example, in Java you don't use String for sensitive data, because Strings in Java have their own memory pool and are immutable, so if anyone manages to inspect the content of your VM's string memory pool, they can read basically every String you ever used in your running app (I think some GC is possible on the String pool too, if you exhaust it, but it's not done by the ordinary GC; the ordinary GC reclaims just object instances, not String data). So in such cases you should, as a programmer, go to great lengths to ensure you actually destroy the values themselves in memory when you don't need them any more; just releasing the memory is not enough.


when does the OS initialize the data, for example - when creating a variable, or when the process starts to run?

Do you realize the code itself needs memory? The OS loads the executable from disk into memory: at first some fixed part of it where the metadata is stored, telling the OS how much more of the executable to load (code + pre-initialized data), the size of the various sections (i.e. how much memory to allocate for the data+bss sections), and the suggested stack size. The binary data is loaded from the file into memory, so it effectively sets the values of that memory. Then the OS prepares the runtime environment for the process, i.e. creates some virtual address space, sets up the various sections, sets access rights (like the code part being read-only and the data no-exec, if the OS is designed that way); meanwhile it keeps all this info in its internal structures, so it can later terminate the process at will. Finally, when the runtime environment is ready, it jumps to the entry point of the code in user mode.

If it is some C/C++ application with default standard libraries, it will further adjust the environment to initialize the C runtime, i.e. it will probably straight away allocate some basic heap memory from the OS and set up the C memory allocator, prepare the stdin/stdout/stderr streams, connect to other OS services as needed, and finally call main(...), passing argc/argv along. But things like global variables int x = 123; were already part of the binary, and loaded by the OS; only the more dynamic things are initialized by libc upon start of the app.

So the OS allocated, for example, 8MiB of RAM for code + data, and set up the virtual space. From then on it has no idea what the code of the app is doing (as long as it does not trigger some guardian, like accessing invalid memory, or doesn't call some OS service). The OS doesn't have any idea whether the app created some variable, or allocated some local variables on the stack (at most it will notice when the stack grows outside of the originally allocated space, by catching the invalid memory access; at that point it may either crash the app, or remap more physical memory to the virtual address area where the stack is [out]growing and let the app continue with new stack memory available).

All the variable initializations etc. either happened while the binary was loaded (by the OS), or they are fully under the control of the app code.

If you call new in C++, it will call the C runtime (that code is appended to your app when building the executable), which will either provide you with memory from the already set-up memory pool, or, if it has run out of spare memory, call the OS heap allocation for some big chunk, which is then again managed by the C memory allocator from clib. Not every new/delete calls the OS - only very few of them; the micro-management is done by the C runtime library, stored in the executable (or loaded dynamically from .DLL/.so files as needed, through the OS service for loading dynamic code).

Just like the JVM is actual application code, doing all sorts of housekeeping too, implementing the GC code, etc... And the JVM must clear the allocated memory before passing it to Java's new in the .class code, because that's how the Java language is defined. The OS again has no idea what is going on; it doesn't even know that the app is a virtual machine interpreting some Java bytecode from .class files - it's simply some process which started and runs (and asks for OS services as it wants).

You have to understand that the context switch of CPU mode between user mode and kernel mode is quite an expensive operation. So if the app called an OS service every time some tiny amount of memory was modified, performance would go down a lot. As modern computers have plenty of RAM, it's easier to provide the starting process with some 10-200MB area (based on the metadata from the executable) to start with, and let it handle dynamically the situations when it needs more. But any reasonable app will minimize OS service calls; that's why the clib has its own memory manager and does not use the OS heap allocator for every new (also, the OS allocator may work at a granularity which is unusable for common code, for example allowing memory to be allocated only in MiB chunks, etc.).

In high-level languages like C/Java you have quite a big part of the app code provided by the standard libraries, so if you just started to learn the language and haven't thought about how it works internally at the machine-code level, you may take all that functionality somehow for granted, or as provided by the OS. It's not; the OS provides only very basic and bare services, and the rest of the C/Java environment is provided by code which was linked into your application from the standard libraries. If you create some "hello world" example in C, usually 90% of the binary size is the C runtime, and only a few bytes are actually produced by you from that hello world source. And when you execute it, thousands of instructions are executed (inside your process, from your binary, not counting the OS loader) before your main(...) is even called.

Ped7g
  • Taking a glance over it afterwards, I didn't even focus on stack memory specifics (like how you don't need to release it explicitly, because it gets released implicitly upon going "up" in the code flow). One could probably go rambling about this for another 10-20 pages of text, but I hope this is enough for the OP to get the idea of how it works. – Ped7g Oct 13 '17 at 11:37
  • thanks for the great answer! just one last question. When a process starts to run, does the OS take a part of the RAM and give it to the process at the beginning? Because if it does, that doesn't make sense to me. I always thought that the OS gives the process a place in RAM only when it actually creates variables, and doesn't give it at the beginning for no reason. If I'm right - and the OS gives a process space in RAM when it creates a variable - does it check whether the logical address is in the data segment, for example, and zero the memory when creating the variable? – ליאב לוי Oct 13 '17 at 17:29
  • in short - when does the OS initialize the data, for example - when creating a variable, or when the process starts to run – ליאב לוי Oct 13 '17 at 17:29
  • so which part of the OS is allocating memory for a process, the loader? Are you saying that when I open an exe file, the loader takes the data from the disk - metadata, actual instructions - loads the instructions into memory, allocates a basic memory space for the process (giving it a space in RAM which no other process can use), after that changes the process's state to Ready, and, when the scheduler decides, starts executing the program on the CPU, instruction by instruction? So on creating a global variable, the physical address which will be used to store it is already initialized – ליאב לוי Oct 13 '17 at 18:55
  • which part of the OS - I don't know, it depends on the OS. I'm pretty sure a modern OS will be quite modularized, so while the execution of a binary starts by calling some main OS service like [`exec`](http://man7.org/linux/man-pages/man3/exec.3.html), how that one is implemented IDK; the allocation of memory is surely handled by some memory allocator, but that's probably called from inside the binary loader code, which is probably called in the setup phase from exec, and it will very likely involve several other parts of the OS to set up the environment. The virtual address of a global is defined during linking. – Ped7g Oct 13 '17 at 19:02
  • And the loader loads instructions + pre-initialized data (`.data` section), or anything else the particular OS and executable format defines (for example, both Windows and NIX systems have a mechanism to make an executable depend on dynamically loaded libraries (DLL or SO files), which may be loaded + re-linked by the OS loader (or the app code may hide them from the metadata and load them later during runtime by calling the appropriate OS services)). Also modern binaries may be signed by a key (especially drivers or installers); the OS may check that metadata too. – Ped7g Oct 13 '17 at 19:08
  • So it's the loader who loads the data section into memory? And another little question: is the stack size of every process equal, or does the OS change it when needed? Let's say my program is really small - 1 local variable. Is the OS able to fit the stack size? – ליאב לוי Oct 16 '17 at 04:30
  • depends on the OS and its implementation. IIRC some are capable of growing the stack dynamically by catching the invalid memory access upon hitting the old limit; others may require the binaries to contain metadata about the requested stack size. If you write some minimal code for such an OS, I guess it may give you just 1 page (4kB usually) of stack space. – Ped7g Oct 16 '17 at 06:49
  • when you said "like you don't need to release it explicitly, because it gets released implicitly upon going "up" in the code flow", what do you mean? How can the OS free my stack when it is still in use? The stack is a part of the minimal space the OS gives to every process, which will serve the process until it terminates. So why would the OS touch that particular address and make it available to other processes when ours is still running/ready/alive – ליאב לוי Oct 21 '17 at 06:36
  • Each process has its own stack. The OS will probably not release any of it (maybe in some extreme case). But when in C/C++ you use local variables, usually they are implemented using stack space, and they are "released" implicitly at the end of the current scope, just by restoring the stack pointer to its original value (calling destructors as needed in C++). So as long as your code flow doesn't have an infinite cycle adding an infinite number of calls to the stack, and it goes "up" to the top levels, the stack returns to its starting size (the "used" memory; the "reserved" part stays) & you can reuse it in the further code flow. – Ped7g Oct 21 '17 at 09:14

It would be the entity that controls the memory between one application and another (and technically during as well). Ideally that is the OS. It is certainly possible to put this on the application, but then you would have security concerns.

Within the application, it is somewhat trivial to see that cleaning per function doesn't happen, as you pointed out.

The big-name operating systems we are used to - those that, with the processor, (attempt to) protect one application from another's space - are going to create a stack per application/thread/whatever, not one big universal stack space everyone shares. The whole of the memory provided to that application on launch could be initialized to some value, not necessarily 0x00 nor 0xFF, before branching to the application. The .text, .data, .bss and other notions of sections of memory/space are initialized per the rules of that language/implementation; the rest of the space may or may not be. But in an environment where applications are not trusted, the OS - which does the loading and launching anyway - would be the entity that does this clearing/cleaning, either on launch or on exit of an application (or actually any time a chunk of memory is allocated or deallocated for that application, which could happen at runtime as well).

old_timer
  • why you are saying that doesn't make sense to me. Let's assume we have a little program which does almost nothing. You want to tell me that the OS will take a part of the RAM - it doesn't have to be contiguous - and will "give" this to the process? I always thought that the OS gives the process a place in RAM only when the process needs it - creating a variable, or of course when the loader puts the code in RAM in order to start executing the process – ליאב לוי Oct 13 '17 at 17:19
  • when the OS loads the program, no matter how large or small, it has to allocate some memory for that program to live and run in. The OS shouldn't have any knowledge of variables, nor should the processor; that is a high-level language programming concept that gets compiled into loadable blobs for the OS. If the application then does mallocs and frees at runtime, then if security conscious it would need to clean on free, or before the malloc if it is a space not already used by that application (so the next app that mallocs or frees doesn't get that space and see the stale data). – old_timer Oct 13 '17 at 18:48

I take it that you are relatively new to computers. What you are missing is the large topic of logical-to-physical memory translation.

Your process memory is organized into logical pages. Those logical pages might map to physical page frames.

A physical page frame that was in use by one process (let's say it completes) will likely be reused by another process. To be reused, the physical page frame must be mapped to a logical page. A lot of different things can happen during that mapping that MIGHT change the values of the memory in that page.

It is possible that two processes want to share pages. In that case, the memory is not altered at all.

It is possible that the page frame will be mapped to a logical page that had been stored in a page file. In that case the logical page will be loaded from the page file.

It is possible that the page will have no data associated with it so the operating system will clear the page before mapping. That clearing may be zero or to some other value.

The AIX O/S used to like to clear data to the value 0xDEADBEEF.

In conclusion, the answer to your question depends upon how the operating system maps the physical page frame to a logical page in your process.

user3344003
  • I know that we have virtual addresses which are translated to physical addresses. What I've missed is that the OS memory management frees the memory used by a process when it ends, and the point of the garbage collector – ליאב לוי Oct 13 '17 at 17:03
  • my whole mistake occurred because I thought that the point of the garbage collector is to free the memory used by the process so the next process will be able to use this memory again, which is not the point of the garbage collector at all - this is the job of the OS memory management. I don't know how it works; I just didn't know it existed. – ליאב לוי Oct 13 '17 at 17:10
  • @ליאבלוי properly releasing your memory within the app (via GC or the particular release call of your memory manager) gives your memory manager a chance to consolidate allocated memory and release the OS-allocated parts which are no longer in use. With Java, the GC may reshuffle the whole memory to remove the last few bits of used memory from some almost-empty chunk; with C++ it would be quite rare to hit a situation where the app has released all memory from a particular chunk allocated from the OS. So you sort of *do* free memory for others, but it is more effective to not eat all of it in the first place (=reuse). – Ped7g Oct 13 '17 at 19:17
  • The process you are talking about is not really a "garbage collector." Normally, a garbage collector is a service that identifies dynamic memory within the process address space that is no longer being used. What you are describing now is the process termination code in the operating system. When a process exits, the OS has to free the page tables and release the memory referenced by the process. – user3344003 Oct 15 '17 at 03:26