
How does a buffer actually optimize the process of reading/writing?

Every time we read a byte, we access the file. I read that a buffer reduces the number of accesses to the file. The question is: how? In the Buffered section of the picture, when we load bytes from the file into the buffer, we access the file just like in the Unbuffered section, so where is the optimization?
I mean, the buffer must access the file every time it reads a byte, so even if the data in the buffer is read faster, this will not improve performance in the process of reading. What am I missing?

[Image: diagram comparing unbuffered reads (program ↔ file) with buffered reads (program ↔ buffer ↔ file)]

luk2302
Ica Sandu

3 Answers


The fundamental misconception is to assume that a file is read byte by byte. Most storage devices, including hard drives and solid-state drives, organize data in blocks. Likewise, network protocols transfer data in packets rather than single bytes.

This affects how the controller hardware and low-level software (drivers and operating system) work. Often, it is not even possible to transfer a single byte on this level. So, requesting the read of a single byte ends up reading one block and ignoring everything but that one byte. Even worse, writing a single byte may imply reading an entire block, changing one byte of it, and writing the block back to the device. For network transfers, sending a packet with a payload of only one byte means spending most of the bandwidth on metadata rather than actual payload.

Note that sometimes, an immediate response is needed or a write is required to be definitely completed at some point, e.g. for safety. That’s why unbuffered I/O exists at all. But for most ordinary use cases, you want to transfer a sequence of bytes anyway and it should be transferred in chunks of a size suitable to the underlying hardware.

Note that even when the underlying system adds buffering of its own, or when the hardware truly transfers single bytes, performing 100 operating-system calls to transfer a single byte each is still significantly slower than performing a single operating-system call telling it to transfer 100 bytes at once.
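The cost of per-byte calls versus one bulk call can be seen in a minimal sketch (the file name and sizes here are made up for illustration):

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class SyscallDemo {
    public static void main(String[] args) throws IOException {
        // Create a small sample file (name is arbitrary for this sketch).
        byte[] data = new byte[100_000];
        try (FileOutputStream out = new FileOutputStream("demo.bin")) {
            out.write(data);
        }

        // Per-byte: each read() is a separate call into the stream/OS layer.
        long t0 = System.nanoTime();
        try (FileInputStream in = new FileInputStream("demo.bin")) {
            while (in.read() != -1) { /* one call per byte */ }
        }
        long perByte = System.nanoTime() - t0;

        // Bulk: one call transfers many bytes at once.
        t0 = System.nanoTime();
        try (FileInputStream in = new FileInputStream("demo.bin")) {
            byte[] chunk = new byte[8192];
            while (in.read(chunk) != -1) { /* one call per 8 KiB chunk */ }
        }
        long bulk = System.nanoTime() - t0;

        System.out.println("per-byte: " + perByte + " ns, bulk: " + bulk + " ns");
    }
}
```

On typical systems the bulk variant is orders of magnitude faster, though the exact numbers depend on OS caching.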


But you should not consider the buffer to be something between the file and your program, as suggested in your picture. You should consider the buffer to be part of your program. Just like you would not consider a String object to be something between your program and a source of characters, but rather a natural way to process such items. E.g. when you use the bulk read method of InputStream (e.g. of a FileInputStream) with a sufficiently large target array, there is no need to wrap the input stream in a BufferedInputStream; it would not improve the performance. You should just stay away from the single byte read method as much as possible.

As another practical example, when you use an InputStreamReader, it will already read the bytes into a buffer (so no additional BufferedInputStream is needed) and the internally used CharsetDecoder will operate on that buffer, writing the resulting characters into a target char buffer. When you use, e.g. Scanner, the pattern matching operations will work on that target char buffer of a charset decoding operation (when the source is an InputStream or ByteChannel). Then, when delivering match results as strings, they will be created by another bulk copy operation from the char buffer. So processing data in chunks is already the norm, not the exception.
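A small sketch of that first point: an InputStreamReader already decodes through an internal byte buffer, so bulk character reads work without wrapping the source in a BufferedInputStream (the sample text and array size are arbitrary):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class ReaderDemo {
    public static void main(String[] args) throws IOException {
        byte[] bytes = "hello buffered world".getBytes(StandardCharsets.UTF_8);

        // No BufferedInputStream needed: the reader pulls bytes in chunks
        // and its internal CharsetDecoder turns them into characters.
        try (InputStreamReader reader =
                 new InputStreamReader(new ByteArrayInputStream(bytes),
                                       StandardCharsets.UTF_8)) {
            char[] chunk = new char[64];
            int n = reader.read(chunk); // bulk read into our own char buffer
            System.out.println(new String(chunk, 0, n));
        }
    }
}
```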

This has been incorporated into the NIO design. So, instead of supporting a single byte read method and fixing it by providing a buffering decorator, as the InputStream API does, NIO’s ByteChannel subtypes only offer methods using application managed buffers.
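A minimal sketch of that NIO style, where the application allocates and manages the buffer itself (the file name is made up for illustration):

```java
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ChannelDemo {
    public static void main(String[] args) throws Exception {
        Path path = Path.of("channel-demo.bin");
        Files.write(path, new byte[]{1, 2, 3, 4, 5});

        // The application owns the buffer; there is no single-byte read
        // method to avoid in the first place.
        try (FileChannel ch = FileChannel.open(path, StandardOpenOption.READ)) {
            ByteBuffer buf = ByteBuffer.allocate(8192);
            int total = 0;
            while (ch.read(buf) != -1) { // fills as much of buf as possible
                buf.flip();              // switch the buffer to draining mode
                total += buf.remaining(); // process the chunk here
                buf.clear();             // reuse the buffer for the next read
            }
            System.out.println("read " + total + " bytes");
        }
    }
}
```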


So we could say that buffering does not improve performance; it is the natural way of transferring and processing data. Rather, not buffering degrades performance by requiring a translation from the natural bulk data operations to single-item operations.

Holger

Basically, for reading: if you request 1 byte, the buffered stream will read, say, 1000 bytes and return you the first one. For the next 999 single-byte reads it will not read anything from the file but serve them from its internal buffer in RAM. Only after you have consumed all 1000 bytes will it actually read another 1000 bytes from the file.
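This can be made visible by counting how often a BufferedInputStream actually asks its source for data (a sketch; the 1000-byte source and buffer size mirror the numbers above):

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class BufferDemo {
    public static void main(String[] args) throws IOException {
        // Count how often the underlying stream is actually asked for data.
        int[] calls = {0};
        InputStream source = new ByteArrayInputStream(new byte[1000]) {
            @Override
            public int read(byte[] b, int off, int len) {
                calls[0]++;
                return super.read(b, off, len);
            }
        };

        // 1000-byte buffer: the first read() pulls in the whole source at once.
        try (BufferedInputStream in = new BufferedInputStream(source, 1000)) {
            while (in.read() != -1) { /* 1000 single-byte reads by the caller */ }
        }
        System.out.println("bulk calls to the source: " + calls[0]);
    }
}
```

Despite 1000 single-byte `read()` calls by the caller, the source sees only a couple of bulk reads (one for the data, one to detect end-of-stream).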

Same thing for writing, but in reverse: if you write 1 byte, it is buffered, and only once you have written 1000 bytes may they actually be written to the file.
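The write side can be sketched the same way, counting bulk writes that reach the underlying stream (buffer size and byte count are illustrative):

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class WriteBufferDemo {
    public static void main(String[] args) throws IOException {
        int[] calls = {0};
        OutputStream sink = new ByteArrayOutputStream() {
            @Override
            public void write(byte[] b, int off, int len) {
                calls[0]++;
                super.write(b, off, len);
            }
        };

        try (BufferedOutputStream out = new BufferedOutputStream(sink, 1000)) {
            for (int i = 0; i < 1000; i++) {
                out.write(0); // accumulates in RAM, not yet passed on
            }
            System.out.println("bulk writes before close: " + calls[0]);
        } // close() flushes the buffered bytes in one bulk write
        System.out.println("bulk writes after close: " + calls[0]);
    }
}
```

The 1000 single-byte writes reach the sink as a single bulk write when the stream is closed (or explicitly flushed).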

Note that choosing the buffer size changes performance quite a bit; see e.g. https://stackoverflow.com/a/237495/2442804 for further details on respecting the file system block size, available RAM, etc.

Basil Bourque
luk2302
  • Which buffer size is the best? – Omid.N Aug 02 '20 at 16:21
  • @Omid.N https://stackoverflow.com/questions/236861/how-do-you-determine-the-ideal-buffer-size-when-using-fileinputstream – luk2302 Aug 02 '20 at 16:21
  • The buffer reads 1000 bytes, OK, but shouldn't the cost of buffered reading be less than unbuffered reading? Why is the cost smaller when buffered? Somehow it's easier to transfer data from the file to the buffer and then to the program. What makes the buffer, which is a temporary zone of memory, so special that data is read faster? – Ica Sandu Aug 02 '20 at 21:25
  • @IcaSandu Yes, RAM access is far quicker than I/O / disk access. You can search Google for comparisons. – luk2302 Aug 02 '20 at 21:31
  • @IcaSandu Without buffering you would have to make a call to the storage device (or other I/O device) every time you read or write a byte. That's an oversimplification, since there are likely buffers at the I/O device and operating system levels as well. The point is that it's more efficient to read and write in bulk, and it's quicker to read from/write to RAM. – Slaw Aug 02 '20 at 21:31

As your picture shows, buffered file contents are kept in memory, while an unbuffered file is not read until it is streamed to the program.

A File is only a representation of a path. Here is the File Javadoc:

An abstract representation of file and directory pathnames.

Meanwhile, a buffer such as ByteBuffer takes content from the file and allocates it in memory, on or outside the heap depending on the buffer type (direct or indirect). From the ByteBuffer.allocateDirect Javadoc:

The buffers returned by this method typically have somewhat higher allocation and deallocation costs than non-direct buffers. The contents of direct buffers may reside outside of the normal garbage-collected heap, and so their impact upon the memory footprint of an application might not be obvious. It is therefore recommended that direct buffers be allocated primarily for large, long-lived buffers that are subject to the underlying system's native I/O operations. In general it is best to allocate direct buffers only when they yield a measureable gain in program performance.

It actually depends on the conditions: if the file is accessed repeatedly, then buffered access is the faster solution. But if the file is larger than main memory and is accessed only once, unbuffered seems to be the better solution.
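The two allocation styles contrasted in the quoted Javadoc can be illustrated with a short sketch (the 4096-byte capacity is arbitrary):

```java
import java.nio.ByteBuffer;

public class AllocateDemo {
    public static void main(String[] args) {
        // Heap buffer: backed by a regular byte[] inside the Java heap.
        ByteBuffer heap = ByteBuffer.allocate(4096);

        // Direct buffer: memory outside the garbage-collected heap, suited
        // to the OS's native I/O operations; costlier to allocate, so best
        // reserved for large, long-lived buffers.
        ByteBuffer direct = ByteBuffer.allocateDirect(4096);

        System.out.println("heap isDirect: " + heap.isDirect());     // false
        System.out.println("direct isDirect: " + direct.isDirect()); // true
        System.out.println("heap hasArray: " + heap.hasArray());     // true
    }
}
```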

Fahim Bagar