On a 32-bit machine each process gets a 4GB virtual address space, so one can see how fragmentation might cause trouble. But a 64-bit machine theoretically has a huge addressable virtual memory, so why is memory fragmentation still an issue (if it is) on a 64-bit machine?
-
2It is not an issue on 64-bit operating systems. – Hans Passant Dec 17 '11 at 09:55
-
Shouldn't this comment be an answer? Existing answers seem to imply it IS a problem in 64bit – paulm Aug 31 '16 at 13:45
2 Answers
Each virtual address that you try to access is mapped by the operating system to physical memory. Physical memory is allocated in pages (e.g. 4K in size). If you manage to allocate a byte at offset 1000000*n, for n from 1 to 1000000 (you could do that with mmap), then the OS has to back each of those bytes with a separate page of physical memory: a million pages, which is something like 4GB. That physical memory is then unavailable for anything else. Had you allocated the bytes contiguously, you'd only need about 1MB of physical memory (roughly 245 pages) for your million bytes.
You can get into a similarly bad situation if you allocate 4GB for legitimate reasons, then deallocate most of it while keeping a small piece of every page allocated. The OS cannot actually reuse the freed memory for anything else, because no physical page is fully free. That's a fragmentation problem.
In theory, you could imagine that virtual addresses 1000000 and 2000000 would map to the same page of physical memory, avoiding the fragmentation. But in practice, and for good reasons, the virtual memory mapping is done on a page-by-page basis. You can read more about it here: http://en.wikipedia.org/wiki/Page_table.

-
Since 64-bit has a crazy virtual address space, none of this matters? Unless you have TBs of RAM and keep doing this with like 50TB? – paulm Sep 08 '16 at 14:32
-
1Read more carefully: it explains why *physical* pages get wasted when you have fragmentation. – DS. Sep 09 '16 at 18:25
-
Yeah but the question is "Why is memory fragmentation an issue on a 64-bit machine?" So surely the start of the answer should be "It isn't" ? – paulm Sep 09 '16 at 23:39
-
You are missing the point: memory fragmentation is as big an issue on 64-bit machines as on 32-bit machines. The number of bits determines the possible size of *virtual* address space, while the danger of memory fragmentation is the waste of *physical* memory (which is as limited on 64-bit machines as on 32-bit ones). – DS. Sep 11 '16 at 15:15
-
I'm confused, on Windows I thought it was only an issue when the virtual address space was too fragmented which should be almost impossible on 64 bit – paulm Sep 12 '16 at 01:30
All that fragmented memory is effectively "wasted". Consider an application with a lot of internal fragmentation: the process requires more pages in memory because its working set is scattered, so its memory footprint is much higher. If the application is contending for physical slots in RAM (a typical home setup still has only about 4 - 8 GB of RAM), it causes more page swapping. Generally you want to reduce your application's memory footprint to avoid memory pressure and contention with other applications.
There are cases where it doesn't really matter; an extra megabyte here or there won't kill you, but it all adds up in larger applications. Whether it's important to keep fragmentation to a minimum depends on the situation: what you're coding and what the aims of your project are.

-
But what about "memory fragmentation"? A response to this should likely cover how malloc, the OS, and the hardware factors into allocating and mapping the virtual memory... – Dec 17 '11 at 04:54
-
@pst From what I understood, the question only asked about the consequences, not how it happens in the first place (although I could have read it wrong). – Jesus Ramos Dec 17 '11 at 05:10