I am interested in a new IoT project called OpenBCI, which is basically an open-source EEG platform for reading and processing brain waves and other biodata. Their docs state that the data transmitted over-the-air (via RFDuino) consists of 24-bit values. To convert those 24-bit values into 32-bit signed integers, they suggest the following Java-friendly Processing code:
int interpret24bitAsInt32(byte[] byteArray) {
    int newInt = (
        ((0xFF & byteArray[0]) << 16) |
        ((0xFF & byteArray[1]) << 8) |
        (0xFF & byteArray[2])
    );
    if ((newInt & 0x00800000) > 0) {
        newInt |= 0xFF000000;
    } else {
        newInt &= 0x00FFFFFF;
    }
    return newInt;
}
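For concreteness, here is how I've been exercising the function in plain Java (the sample inputs are my own, not from the OpenBCI docs), which at least shows it round-trips some obvious 24-bit two's-complement patterns:

```java
public class Interpret24Bit {
    static int interpret24bitAsInt32(byte[] byteArray) {
        int newInt = (
            ((0xFF & byteArray[0]) << 16) |
            ((0xFF & byteArray[1]) << 8) |
            (0xFF & byteArray[2])
        );
        if ((newInt & 0x00800000) > 0) {
            newInt |= 0xFF000000;
        } else {
            newInt &= 0x00FFFFFF;
        }
        return newInt;
    }

    public static void main(String[] args) {
        // Smallest positive value: 0x000001 -> 1
        System.out.println(interpret24bitAsInt32(new byte[]{0x00, 0x00, 0x01}));
        // All 24 bits set: 0xFFFFFF is -1 in 24-bit two's complement
        System.out.println(interpret24bitAsInt32(new byte[]{(byte) 0xFF, (byte) 0xFF, (byte) 0xFF}));
        // Only bit 23 set: 0x800000 is the 24-bit minimum, -8388608
        System.out.println(interpret24bitAsInt32(new byte[]{(byte) 0x80, 0x00, 0x00}));
    }
}
```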
I guess I'm trying to understand exactly what is going on here. Let's take the first chunk of code:
    int newInt = (
        ((0xFF & byteArray[0]) << 16) |
        ((0xFF & byteArray[1]) << 8) |
        (0xFF & byteArray[2])
    );
- Why is it safe to assume there are 3 bytes in the input?
- What is the significance of the 0xFF value?
- What is the purpose of the left-bitshifting (<<)?
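As far as I can tell from my own experimenting (this is not from the OpenBCI docs), the masking has to do with Java's byte being signed, so a byte with its high bit set reads as a negative number until it's masked:

```java
public class MaskDemo {
    public static void main(String[] args) {
        byte b = (byte) 0xAB; // high bit set, so Java treats this as -85

        // Widening byte to int sign-extends: -85 stays -85 (0xFFFFFFAB)
        System.out.println((int) b);

        // 0xFF & b keeps only the low 8 bits, giving the unsigned value 171 (0xAB)
        System.out.println(0xFF & b);

        // Left-shifting then positions the byte: 0xAB ends up in bits 16-23
        System.out.println(Integer.toHexString((0xFF & b) << 16));
    }
}
```

If that's right, the shifts are just placing byte 0 in bits 16-23, byte 1 in bits 8-15, and byte 2 in bits 0-7, but I'd like confirmation.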
The second segment is also a bit of a mystery:
    if ((newInt & 0x00800000) > 0) {
        newInt |= 0xFF000000;
    } else {
        newInt &= 0x00FFFFFF;
    }
- Why newInt & 0x00800000? What's the significance of 0x00800000?
- Why the if-else based on a positive vs. zero result of the above operation?
- What is the significance of 0xFF000000 and 0x00FFFFFF?
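My own (possibly wrong) reading is that 0x00800000 is bit 23, the sign bit of a 24-bit two's-complement number, and that OR-ing in 0xFF000000 copies that sign into the top byte of the 32-bit int (sign extension). A quick check I ran, with values I picked myself:

```java
public class SignExtendDemo {
    public static void main(String[] args) {
        int value = 0x00FFFFFE; // 24-bit pattern for -2 in two's complement

        // Bit 23 (0x00800000) is set, so the 24-bit value is negative
        System.out.println((value & 0x00800000) > 0);

        // OR-ing 0xFF000000 fills bits 24-31 with ones: the 32-bit int becomes -2
        System.out.println(value | 0xFF000000);

        int positive = 0x00000005;
        // For non-negative values, AND-ing with 0x00FFFFFF leaves them unchanged
        System.out.println(positive & 0x00FFFFFF);
    }
}
```

If that reading is correct, the else branch looks redundant to me (the top byte should already be zero after packing three masked bytes), but I'm not sure.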
I guess there's just a lot of handwaving and magic going on with this function that I'd like to understand better!