
This is essentially a question about how Linux virtual memory works. I am working on a simple custom allocator that is part of an algorithm described in a research paper. The allocator is rather simple, perhaps to keep the implementation simple, but it is predicated on all of the user-space RAM being allocated at once in a naive implementation and then divided up as needed by malloc. Is there a downside to doing this on Linux? Does it cause memory pressure to sbrk some large value, or to do something similar with mmap?

C. Cheng
  • I suppose that depends on the kernel configuration (try `sysctl -a | grep '^vm\.'`). Overcommit settings and memory reservation could be an issue. – root Jun 23 '19 at 06:34
  • I am not sure about this, but does virtual memory map each new page to a shared zero page by default and then copy-on-write it? I am curious whether that is the case (I seem to remember reading it somewhere) and what the overhead would be if so, since I assume the kernel still needs to keep something to record that the pages have been reserved. – C. Cheng Jun 23 '19 at 09:50
  • Assuming you're talking about a server (or anything with an MMU), the kernel only allocates pages; it doesn't write anything to them. They are not zeroed at allocation, but rather when they are faulted in (see [this question](https://stackoverflow.com/a/6005003/10678955)). Also, make sure you set ulimit(1) correctly. – root Jun 23 '19 at 17:39

0 Answers