For java.io.InputStream, there are two primary read methods: int read() and int read(byte[] b, int off, int len).
Similarly, java.io.OutputStream has two write methods: void write(int b) and void write(byte[] b, int off, int len).
I understand the basic difference, but the description of write(int b) says: "The byte to be written is the eight low-order bits of the argument b. The 24 high-order bits of b are ignored." If that's the case, then every call wastes the remaining 24 bits of the 32-bit int the CPU loads. If I use write(byte[] b, int off, int len) instead, I occupy heap memory for the size of the byte array. While trying to decide which works better for high scalability, I can't ignore that write(int b) wastes 24 bits (3 bytes) per call, while read/write(byte[] b, int off, int len) risks larger buffers in memory. So which is the better option to choose?
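To make the comparison concrete, here is roughly what the two styles look like (a minimal sketch; the class name and out/data are placeholders, not my actual code):

```java
import java.io.IOException;
import java.io.OutputStream;

// Minimal sketch of the two call styles being compared.
public class WriteStyles {
    // Style 1: one int-carrying call per byte; the high 24 bits of each int are ignored.
    static void perByte(OutputStream out, byte[] data) throws IOException {
        for (byte b : data) {
            out.write(b);
        }
    }

    // Style 2: one bulk call; the byte[] buffer itself occupies heap memory.
    static void bulk(OutputStream out, byte[] data) throws IOException {
        out.write(data, 0, data.length);
    }
}
```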
As a workaround, I tried extending InputStream and OutputStream and overriding the read() and write(int b) methods to pass a byte[4] through, so that all 32 bits are used. It works just fine, but I still have to see whether it gives any performance improvement. It's effectively the same as calling read/write(b, 0, 4).
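Here is a simplified sketch of the write side of that workaround (the name IntWidthOutputStream is just for illustration; my real version also covers the read side, and I realize this deviates from the documented single-byte contract of write(int b)):

```java
import java.io.IOException;
import java.io.OutputStream;

// Sketch of the workaround: a wrapper whose write(int) sends all four bytes
// of the argument instead of only the low eight.
public class IntWidthOutputStream extends OutputStream {
    private final OutputStream out;

    public IntWidthOutputStream(OutputStream out) {
        this.out = out;
    }

    @Override
    public void write(int b) throws IOException {
        // Pack the 32-bit int into a byte[4] (big-endian) and write it in one
        // call, equivalent to write(buf, 0, 4).
        byte[] buf = new byte[4];
        buf[0] = (byte) (b >>> 24);
        buf[1] = (byte) (b >>> 16);
        buf[2] = (byte) (b >>> 8);
        buf[3] = (byte) b;
        out.write(buf, 0, 4);
    }

    @Override
    public void close() throws IOException {
        out.close();
    }
}
```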
I would appreciate any help/comments on this topic.