I'm studying a working program that, among other things, verifies the validity of an incoming message. The CRC used is CRC-CCITT 16, the initial value of the CRC register is 0xFFFF, and the message arrives as bytes whose bits are received LSb first, with the bytes themselves sent MSB first. The message forms one long bitstream of up to 1024 bits, including the 16 CRC bits at the end.
I fully understand the calculation of the CRC value itself. What puzzles me, and what I can find no reference justifying, is the validity check performed after the whole message has been received: the CRC value is compared against 0xF0B8 instead of 0x0000. Am I missing something?
// crcValue is the 16-bit CRC variable under construction
// dataBit is a byte holding one bit of the data bitstream in its LSb
...
// CRC calculation, bit by bit (0x8408 is the CCITT polynomial 0x1021 bit-reversed)
if (dataBit ^ (crcValue & 0x0001)) crcValue = (crcValue >> 1) ^ 0x8408;
else crcValue >>= 1;
// After running over the whole bitstream the crcValue is checked
if (crcValue != 0xF0B8) ... // reject data
else ... // accept data
The program works. As said, I just cannot see where the 0xF0B8 value comes from.
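For reference, here is a minimal, self-contained sketch of the scheme as I understand it. The helper names (crc_bit, crc_buf) and the test payload are mine, not from the actual program, and I am assuming the sender appends the one's complement of the CRC, low byte first, as the last 16 bits of the stream:

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Same bit-by-bit update as in the program: reflected CRC-CCITT,
   polynomial 0x8408 (0x1021 bit-reversed). */
static uint16_t crc_bit(uint16_t crc, uint8_t dataBit)
{
    if (dataBit ^ (crc & 0x0001)) return (crc >> 1) ^ 0x8408;
    return crc >> 1;
}

/* Run the CRC over a buffer, feeding each byte LSb first,
   starting from the initial value 0xFFFF. */
static uint16_t crc_buf(const uint8_t *buf, size_t len)
{
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; i++)
        for (int b = 0; b < 8; b++)
            crc = crc_bit(crc, (buf[i] >> b) & 1);
    return crc;
}

int main(void)
{
    uint8_t msg[6] = { 0x12, 0x34, 0x56, 0x78 }; /* made-up payload */

    /* Assumption: the sender appends the one's complement of the
       CRC, low byte first. */
    uint16_t fcs = (uint16_t)~crc_buf(msg, 4);
    msg[4] = (uint8_t)(fcs & 0xFF);
    msg[5] = (uint8_t)(fcs >> 8);

    /* The receiver runs the same CRC over payload + CRC bytes. */
    printf("final crcValue = 0x%04X\n", crc_buf(msg, 6));
    return 0;
}

With this arrangement the final value comes out as 0xF0B8 for any payload, which matches the program's check, but I still don't see where that particular constant falls out from.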
Thank you in advance.