
I've seen many people on the web claiming that "the JVM requests a contiguous unfragmented block of memory from the OS when it starts". What I can't understand is how this correlates with the notion of virtual memory.

The OS might swap out any process's memory pages to disk, then load them back into RAM - chances are they will land at different physical locations, so the physical memory used by the process will no longer be contiguous.

As for the process's virtual memory - that will always be "contiguous" from the process's point of view, as each process has its own address space.

Thus, what I'd like to understand is:

  • Is the statement that memory allocated by the OS to a JVM has to be
    contiguous really true?
  • If so, how does the OS ensure memory stays contiguous, considering it might be swapped out to disk and back into RAM?
  • If the statement is not true, what might be the reasons why the OS
    would deny the process the virtual memory it asks for? Memory
    overcommit settings?
Nikita Tkachenko

1 Answer

  1. The JVM allocates memory for different purposes. Of course, this is not just a single chunk of memory. Some JVM structures need to occupy a contiguous chunk, others do not.

  2. If we talk about Java Heap in HotSpot JVM - yes, it is a contiguous range of virtual address space.
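A small sketch of what this reservation looks like from inside the process: `Runtime.maxMemory()` reports the ceiling of the reserved range (controlled by `-Xmx`), while `totalMemory()` reports the portion actually committed so far. The class name is made up for illustration.

```java
// Sketch: the maximum heap (-Xmx) is reserved as one contiguous virtual
// address range at startup; only part of it is committed up front.
public class HeapSizes {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long reserved  = rt.maxMemory();   // ceiling of the contiguous reservation
        long committed = rt.totalMemory(); // portion currently committed
        System.out.println("reserved=" + reserved + " committed=" + committed);
    }
}
```

Run it with different `-Xmx` values to watch `reserved` change while `committed` stays small until the heap actually grows.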

  3. Contiguous virtual memory does not have to be backed by contiguous physical memory. The page table is responsible for translating virtual addresses to physical ones, and it makes it possible to map a contiguous virtual address range to fragmented physical pages, even after swapping.
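A toy model of this translation (the frame numbers are made up, and real page tables live in the kernel, not in a `HashMap`): virtual pages 0..7 are contiguous, yet they map to scattered physical frames, and a virtual address still resolves correctly.

```java
import java.util.HashMap;
import java.util.Map;

// Toy page table: contiguous virtual pages backed by scattered physical frames.
public class PageTableDemo {
    public static void main(String[] args) {
        // Arbitrary, non-contiguous physical frame numbers for virtual pages 0..7
        int[] frames = {42, 7, 99, 58, 3, 14, 71, 26};
        Map<Integer, Integer> pageTable = new HashMap<>();
        for (int page = 0; page < frames.length; page++) {
            pageTable.put(page, frames[page]);
        }

        int pageSize = 4096;
        long virtualAddr = 3L * pageSize + 123;       // an address inside virtual page 3
        int  page   = (int) (virtualAddr / pageSize); // which virtual page
        int  offset = (int) (virtualAddr % pageSize); // offset within the page
        long physicalAddr = (long) pageTable.get(page) * pageSize + offset;

        // prints: virtual 12411 -> physical 237691
        System.out.println("virtual " + virtualAddr + " -> physical " + physicalAddr);
    }
}
```

If a page is swapped out and later reloaded into a different frame, only the page-table entry changes; the virtual address stays the same.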

  4. While it's usually not a problem to find a contiguous virtual address range for Java heap or another JVM structure on a 64-bit system, this can be a real issue on a 32-bit system.

  5. You are right, OS memory overcommit settings may cause an mmap or mprotect call to fail if the process's total virtual memory size exceeds a threshold.

apangin
  • Thank you for the clarifications. What I fail to understand in relation to point 4 is how a process's virtual address space can become fragmented before the JVM has even had a chance to start (which seems to be the case with some occurrences of the “Could not reserve enough space for object heap” error). Could it be because certain shared libraries are loaded at fixed addresses in virtual memory? Can there be other reasons? – Nikita Tkachenko Apr 13 '20 at 08:53
  • 2
    @NikitaTkachenko Right, by the time JVM allocates Java Heap, a number of shared libraries are already loaded (not at fixed addresses though). – apangin Apr 13 '20 at 09:26
  • 2
    @NikitaTkachenko mind that to ensure that the specified maximum heap size stays contiguous, the associated logical address range has to be reserved at startup time and hence isn’t available to native code, even if the Java application doesn’t actually use it. So for 32-bit instances of the HotSpot JVM, not only libraries loaded at startup but also libraries you might want to load later on are/were a problem. You have/had to carefully make the right trade-off even before the JVM started. – Holger Apr 14 '20 at 14:54