I am working with large files, using MappedByteBuffer for read and write operations. I have some gaps in my knowledge, so I have a few questions about it.
MappedByteBuffer buf = raf.getChannel().map(FileChannel.MapMode.READ_WRITE, offset, size);
I know that a ByteBuffer's size is limited to Integer.MAX_VALUE, so how should I choose the size of each MappedByteBuffer? Should I map small pieces, or go all the way up to Integer.MAX_VALUE?
If I increase my mapping size, does my application's read and write performance also increase?
Does memory usage also increase as the mapping size increases? I am asking because I create multiple files for reading and writing. So, for example, if one file maps 2 GB of memory and I have 6 files, do I need 12 GB of memory, or is my idea completely wrong?
Is this related to the JVM -Xmx setting or to my physical memory?
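To see where the memory actually goes, I tried printing the buffer pool stats (my understanding, which may be wrong, is that BufferPoolMXBean reports mapped memory separately from the heap; the class and helper names below are mine):

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.util.List;

public class MappedPoolCheck {
    // Maps a small temp file and returns the count reported by the "mapped" buffer pool.
    static long mapAndCount() throws Exception {
        File file = File.createTempFile("mapcheck", ".dat");
        file.deleteOnExit();
        try (RandomAccessFile raf = new RandomAccessFile(file, "rw")) {
            MappedByteBuffer buf = raf.getChannel()
                    .map(FileChannel.MapMode.READ_WRITE, 0, 1 << 20); // 1 MB mapping
            buf.put(0, (byte) 1); // touch the mapping
            List<BufferPoolMXBean> pools =
                    ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
            long mappedCount = 0;
            for (BufferPoolMXBean pool : pools) {
                // On HotSpot the pools are named "direct" and "mapped";
                // neither is counted against the -Xmx heap limit.
                System.out.println(pool.getName() + ": count=" + pool.getCount()
                        + ", memoryUsed=" + pool.getMemoryUsed());
                if (pool.getName().equals("mapped")) {
                    mappedCount = pool.getCount();
                }
            }
            return mappedCount;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("mapped buffers: " + mapAndCount());
    }
}
```

On my machine this shows the mapping in the "mapped" pool rather than on the heap, but I am not sure how that relates to physical memory.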
This is my usage:
List<MappedByteBuffer> mappings = new ArrayList<MappedByteBuffer>();
int mSize = 25;
long MAPPING_SIZE = 1L << mSize; // 32 MB per mapping
File file = File.createTempFile("test", ".dat");
RandomAccessFile raf = new RandomAccessFile(file, "rw");
ByteOrder byteOrder = java.nio.ByteOrder.nativeOrder(); // e.g. LITTLE_ENDIAN on x86
try {
    long size = 8L * width * height; // total file size in bytes, 8 bytes per value
    for (long offset = 0; offset < size; offset += MAPPING_SIZE) {
        long size2 = Math.min(size - offset, MAPPING_SIZE);
        MappedByteBuffer buf = raf.getChannel().map(FileChannel.MapMode.READ_WRITE, offset, size2);
        buf.order(byteOrder);
        mappings.add(buf);
    }
} catch (IOException e) {
    e.printStackTrace();
}
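For completeness, this is a sketch of how I address a value at a global byte position across the chunks (the class and helper names are mine; it assumes positions are 8-byte aligned and MAPPING_SIZE is a multiple of 8, so a long never straddles two mappings):

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.util.ArrayList;
import java.util.List;

public class ChunkedMapping {
    static final long MAPPING_SIZE = 1L << 25; // 32 MB chunks, as in my code above

    // Map the whole file in MAPPING_SIZE pieces (same loop as in my snippet).
    static List<MappedByteBuffer> mapAll(RandomAccessFile raf, long size) throws Exception {
        List<MappedByteBuffer> mappings = new ArrayList<MappedByteBuffer>();
        for (long offset = 0; offset < size; offset += MAPPING_SIZE) {
            long chunk = Math.min(size - offset, MAPPING_SIZE);
            mappings.add(raf.getChannel().map(FileChannel.MapMode.READ_WRITE, offset, chunk));
        }
        return mappings;
    }

    // Write/read a long at a global byte position. Assumes pos is 8-byte aligned
    // and MAPPING_SIZE is a multiple of 8, so a value never straddles two chunks.
    static void putLong(List<MappedByteBuffer> m, long pos, long v) {
        m.get((int) (pos / MAPPING_SIZE)).putLong((int) (pos % MAPPING_SIZE), v);
    }

    static long getLong(List<MappedByteBuffer> m, long pos) {
        return m.get((int) (pos / MAPPING_SIZE)).getLong((int) (pos % MAPPING_SIZE));
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("test", ".dat");
        f.deleteOnExit();
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            long size = 8L * 2048 * 4096; // 64 MB total -> two 32 MB chunks
            List<MappedByteBuffer> mappings = mapAll(raf, size);
            putLong(mappings, size - 8, 42L); // last slot, lands in the second chunk
            System.out.println(getLong(mappings, size - 8)); // prints 42
        }
    }
}
```

Is this the right way to work with the list of mappings, or is there a better approach?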