Depending on a ton of factors, `.allocateDirect` is more likely to result in an explicit `malloc` call than `.allocate` is; but it depends on the OS+Arch combo, and it may well just be an alias for `.allocate`.
Some context and detail in case that answer isn't particularly satisfying:
Ordinarily, objects in Java (specifically, byte arrays, which are the underlying data store for your basic `HeapByteBuffer`, made by `.allocate`) have the following properties (there's a short sketch after this list):
- They are created and remain in the JVM heap memory.
- The garbage collector will inspect them and keep track of them, and will move them around.
- Arrays don't actually have to be contiguous. The JVM will try, but nothing in the spec says the JVM must keep the bytes together. Again the garbage collector comes into play somewhat: if there are two large segments of free memory in the heap and you allocate a byte array somewhat larger than either segment, the GC would either have to move everything around (compact), throw an exception, or allow the array to be 'split' to avoid having to move stuff. I'm not sure any JVMs exist that really do that splitting.
- The memory is also part of the JVM's process; as far as the OS is concerned, the entire heap (whether it currently contains live objects or not) counts as 'used memory'.
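A minimal sketch of what those properties look like from the Java side (class name and sizes are just illustrative):

```java
import java.nio.ByteBuffer;

public class HeapBufferDemo {
    public static void main(String[] args) {
        // .allocate gives a heap buffer, backed by an ordinary byte[] that lives
        // in the JVM heap and is tracked (and possibly moved) by the GC.
        ByteBuffer heap = ByteBuffer.allocate(1024);

        System.out.println(heap.isDirect());  // false
        System.out.println(heap.hasArray());  // true: there is a backing byte[]

        byte[] backing = heap.array();        // the plain Java array underneath
        System.out.println(backing.length);   // 1024
    }
}
```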
With `.allocateDirect`, you break a bunch of those rules (again, there's a sketch after the list):
- The buffer's memory is (probably) allocated separately; it exists outside of the heap (e.g. it would be in addition to `-Xmx`, the parameter that sets the max heap size).
- Presumably this always causes an OS-level `malloc` call, to ask the OS for a contiguous chunk of RAM. `malloc` calls can take time.
- The block really is contiguous, because `malloc` made it.
- The block is never moved around.
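A sketch of the observable difference (class name is illustrative, and as noted below, whether anything actually ends up off-heap is up to the JVM):

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        // .allocateDirect asks for memory outside the normal object heap.
        // On HotSpot-style JVMs the off-heap total can be capped with
        // -XX:MaxDirectMemorySize (separate from -Xmx).
        ByteBuffer direct = ByteBuffer.allocateDirect(1024);

        System.out.println(direct.isDirect());  // true
        System.out.println(direct.hasArray());  // typically false: no backing byte[]

        // direct.array() would throw UnsupportedOperationException here,
        // because there is no Java array behind this buffer.
    }
}
```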
Note that there are some gaps in the theory here: if the block is explicitly `malloc`ed, then it should be explicitly `free`d, and yet `ByteBuffer` neither has a `deallocate()` method nor is it an `(Auto)Closeable`. Also, the javadoc reserves lots of rights; for example, `.allocateDirect` might do nothing different from `.allocate` - the JVM is free to use the heap or not. These direct OS-level interactions tend to be like that: Java has to run on a wide array of OS+Arch combos. If some OS+Arch combo doesn't have direct buffers, what now? Should `.allocateDirect` fail fast (throw an exception)? That would be sensible, but only if the spec locked down specifically what a direct `ByteBuffer` guarantees you, and therein lies the problem: there is tons of variation across OS+Arch combos in what 'direct' really means.
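For what it's worth, cleanup in practice looks something like this sketch - it leans entirely on GC behaviour, which is exactly what the spec refuses to pin down:

```java
import java.nio.ByteBuffer;

public class DirectBufferLifetime {
    public static void main(String[] args) throws InterruptedException {
        ByteBuffer direct = ByteBuffer.allocateDirect(64 * 1024 * 1024);
        direct.put(0, (byte) 42);

        // There is no direct.free() or direct.close(); the native block is only
        // handed back some time after the ByteBuffer object itself becomes
        // unreachable and gets collected.
        direct = null;

        // At best a hint: the JVM may ignore it, and even if a GC runs, *when*
        // the off-heap memory is actually released is not specified.
        System.gc();
        Thread.sleep(100);
    }
}
```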
"Does not move around" is a requirement for various low-level I/O OS kernel calls (where you tell the kernel: Please just tell the network hardware in the system to directly copy incoming bytes straight into this memory block - that kind of low-level I/O. Not all OSes support it, and not all support it in the same way). A plain jane heap buffer simply can't use that; if the underlying OS does support it and so does the JVM, the JVM has to make its own direct buffer (outside of the heap / in a section cordonned off from the GC to ensure it does not move), and start a separate process to blit those bytes into your buffer, taking into account the GC system as that can move around. In contrast, if you ask e.g. a FileChannel
object to copy bytes from a file to your direct buffer, it might be possible that the native impl backing FileChannel
of your JVM will just tell the OS to tell the SSD to directly do so with no interaction from the OS/CPU whatsoever. Some hardware can do that. But only to 'fixed' memory locations.
Whether your JVM can actually do all that - no guarantees. But if it can, it can only do that if you make a direct buffer.
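To make the `FileChannel` point concrete, a hedged sketch (the file name is a placeholder, and whether the copy really gets offloaded is entirely up to the JVM/OS/hardware - the Java code looks the same either way):

```java
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class DirectFileRead {
    public static void main(String[] args) throws Exception {
        try (FileChannel ch = FileChannel.open(Path.of("some-file.bin"),
                                               StandardOpenOption.READ)) {
            // With a direct buffer the native implementation *may* hand the
            // transfer to the OS/hardware without an extra copy; with a heap
            // buffer it generally has to stage the bytes somewhere first.
            ByteBuffer buf = ByteBuffer.allocateDirect(8192);
            int read = ch.read(buf);
            buf.flip();
            System.out.println("Read " + read + " bytes");
        }
    }
}
```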