0

On Linux, or generally on any OS that uses an MMU for virtual memory management (e.g. on x86), I have worked out a scenario in which free memory is available yet malloc can still effectively fail. Please confirm whether this is actually possible.

Suppose the system has 4GB of free memory and we allocate one million pairs of chunks, one after another: first a 3k chunk, then a 1k chunk. Each pair therefore totals 4k, and one million such pairs add up to 4GB. Because the allocations are sequential, the OS/MMU backs the 3k chunk with one 4k page, and the remaining 1k of that page is used for the following 1k chunk.

Next, suppose the process frees all one million 1k chunks, which should return 1GB of memory. However, these freed 1k holes alternate in the address space with live 3k chunks, so the memory can only be reused if a later allocation requests 1k or less. If the process never again requests a chunk of 1k or smaller — for example, if it only ever requests 3k chunks from then on — then the 1GB freed by releasing the 1k chunks will never be used.

How does the MMU, or an OS like Linux, handle this? If this case is real — memory going unused with no further allocation possible from it — then there must be many more scenarios like it. If Linux or any other OS avoids wasting memory in this scenario, how does it do so?

I know about page allocation and how the buddy allocator works on Linux, but I could not find a solution to this issue there; please explain if it can solve this. As far as I can tell, while the 3k chunks are in use, we cannot break their virtual-to-physical mappings while the process continues to run.

tla
mr.anand
  • The specific scenario you describe doesn't happen because allocation algorithms are smarter than simply "*allocate ... chunks of two sizes one after another*". E.g., study the [slab allocator](https://www.kernel.org/doc/gorman/html/understand/understand011.html). Linux also *overcommits* memory, i.e. "memory" can be "allocated" to a process, but that virtual memory is not actually backed until it is actually accessed. – sawdust Mar 04 '23 at 23:34
  • Allocators in production-quality software are typically smart — for example, they use memory pools — but consider a simple case such as a million connections landing on a web server: many allocation patterns can cause this problem. I found some lwn.net articles noting that fragmentation does typically happen, but I could not find definitive benchmarking of how often it occurs and how much memory it costs. – mr.anand Mar 05 '23 at 07:10

1 Answer

1

The specific scenario you mention is unfortunately not solved by virtual memory if your page size is 4K, and compaction doesn't work for this particular case either. In practice, slab allocation decreases fragmentation by allocating larger chunks of memory and carving them into many objects/structs of the same size; see https://en.wikipedia.org/wiki/Slab_allocation
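A toy user-space illustration of the slab idea: one contiguous slab carved into equal-size slots, with a free list threaded through the unused slots themselves. The names here (`pool_init`, `pool_alloc`, `pool_free`) are hypothetical, not the kernel's actual API:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Fixed-size pool: one contiguous slab divided into equal slots.
 * Free slots form an intrusive linked list (the "next" pointer is
 * stored inside the free slot's own memory). */
typedef struct pool {
    void *slab;      /* the single large allocation */
    void *free_list; /* head of the free-slot list */
} pool;

static int pool_init(pool *p, size_t slot_size, size_t nslots)
{
    assert(slot_size >= sizeof(void *)); /* slot must hold a pointer */
    p->slab = malloc(slot_size * nslots);
    if (!p->slab)
        return -1;
    p->free_list = NULL;
    for (size_t i = 0; i < nslots; i++) {
        void *slot = (char *)p->slab + i * slot_size;
        *(void **)slot = p->free_list; /* push slot onto free list */
        p->free_list = slot;
    }
    return 0;
}

static void *pool_alloc(pool *p)
{
    void *slot = p->free_list;
    if (slot)
        p->free_list = *(void **)slot; /* pop */
    return slot; /* NULL when the pool is exhausted */
}

static void pool_free(pool *p, void *slot)
{
    *(void **)slot = p->free_list; /* push back */
    p->free_list = slot;
}
```

Because every slot in a pool has the same size, freeing any object leaves a hole that any future allocation of that type can reuse — which is exactly what defeats the interleaved 3k/1k pattern in the question.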

kcompactd performs proactive compaction and is a remedy for hugepage fragmentation, which is not directly relevant to the question asked but interesting nonetheless. Hugepage fragmentation is a real problem that happens in production; if you're interested, https://lwn.net/Articles/592011/ is a good place to start.

Dummyc0m