I have the following method:
int convert3ByteChunkTo24Bits(final byte[] bytes) {
    int bitsFor3ByteChunk = 0;
    bitsFor3ByteChunk = bitsFor3ByteChunk | (bytes[0] & 0b11111111);
    bitsFor3ByteChunk = bitsFor3ByteChunk << 8;
    bitsFor3ByteChunk = bitsFor3ByteChunk | (bytes[1] & 0b11111111);
    bitsFor3ByteChunk = bitsFor3ByteChunk << 8;
    bitsFor3ByteChunk = bitsFor3ByteChunk | (bytes[2] & 0b11111111);
    return bitsFor3ByteChunk & 0b00000000_11111111_11111111_11111111;
}
The purpose of the method is to concatenate the three bytes passed in and return them packed into a single int. Here is a test method I have:
void testConvert3ByteChunkTo24Bits() {
    final byte ca = (byte) 0xCA;
    final byte fe = (byte) 0xFE;
    final byte ba = (byte) 0xBA;
    final byte[] man = {ca, fe, ba};
    final int bytesIn24Bits = base64EncoderHelper.convert3ByteChunkTo24Bits(man);
    System.out.println(bytesIn24Bits);
}
The output is:
13303482
which in binary is:
00000000110010101111111010111010
This is correct and is the intended result, since:
0xCA = 11001010
0xFE = 11111110
0xBA = 10111010
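For reference, the binary form can be printed directly; assuming the test method above, this one-liner shows it (Integer.toBinaryString omits leading zeros):

System.out.println(Integer.toBinaryString(bytesIn24Bits)); // prints 110010101111111010111010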
So far, so good. There is just one thing I do not understand: why do I need the bitwise ANDs, as in
(bytes[0] & 0b11111111)
When I leave out the bitwise AND operations, so that the implementation is as follows:
int convert3ByteChunkTo24Bits(final byte[] bytes) {
    int bitsFor3ByteChunk = 0;
    bitsFor3ByteChunk = bitsFor3ByteChunk | (bytes[0]);
    bitsFor3ByteChunk = bitsFor3ByteChunk << 8;
    bitsFor3ByteChunk = bitsFor3ByteChunk | (bytes[1]);
    bitsFor3ByteChunk = bitsFor3ByteChunk << 8;
    bitsFor3ByteChunk = bitsFor3ByteChunk | (bytes[2]);
    return bitsFor3ByteChunk & 0b00000000_11111111_11111111_11111111;
}
the result will be:
00000000111111111111111110111010
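Here is a minimal unrolled trace of the no-AND version that I used to inspect the intermediate values, using the same bytes as the test; the hex values in the comments are what each step produces:

int v = 0;
v = v | ((byte) 0xCA);   // 0xffffffca -- the byte becomes an int before the OR
v = v << 8;              // 0xffffca00
v = v | ((byte) 0xFE);   // 0xfffffffe
v = v << 8;              // 0xfffffe00
v = v | ((byte) 0xBA);   // 0xffffffba
System.out.println(Integer.toBinaryString(v & 0b00000000_11111111_11111111_11111111));
// prints 111111111111111110111010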
What am I missing? What does
bytes[0] & 0b11111111
fix here? (I found it kind of by trial and error and am still curious why it works...)
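Printing a single operand in isolation shows the same pattern; a minimal check with the same 0xCA byte:

byte b = (byte) 0xCA;
System.out.println(Integer.toHexString(b));        // ffffffca
System.out.println(Integer.toHexString(b & 0xFF)); // ca -- 0xFF is the same mask as 0b11111111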