I'm following this guide on using ADPCM to encode 12-bit ADC values into 4-bit samples on an MSP430. It works well as-is, but it's designed for 16-bit ADC readings: the quantizer step sizes are tuned to a ±32767 range, so feeding it raw 12-bit values loses resolution. I'm wondering how the following tables would change for a 12-bit input signal instead.
// Index adjustment table: how far the step-size index moves for each 4-bit code
const signed char StepSizeAdaption[8] = {
-1, -1, -1, -1, 2, 4, 6, 8
};
// Quantizer step size lookup table
const int StepSize[89] = {
7, 8, 9, 10, 11, 12, 13, 14,
16, 17, 19, 21, 23, 25, 28, 31,
34, 37, 41, 45, 50, 55, 60, 66,
73, 80, 88, 97, 107, 118, 130, 143,
157, 173, 190, 209, 230, 253, 279, 307,
337, 371, 408, 449, 494, 544, 598, 658,
724, 796, 876, 963, 1060, 1166, 1282, 1411,
1552, 1707, 1878, 2066, 2272, 2499, 2749, 3024,
3327, 3660, 4026, 4428, 4871, 5358, 5894, 6484,
7132, 7845, 8630, 9493, 10442, 11487, 12635, 13899,
15289, 16818, 18500, 20350, 22385, 24623, 27086, 29794,
32767
};
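For context, here is the encode step as I understand the standard IMA/DVI ADPCM algorithm, shown only so it's clear where the two tables plug in. This is my own sketch (the function name, the PrevSample/PrevIndex state, and the 16-bit clamp are my choices, not code from the guide):

long PrevSample = 0;   // reconstructed previous sample (the predictor)
int  PrevIndex  = 0;   // current position in StepSize[]

unsigned char ADPCMEncode(int sample)
{
    long diff, diffq;
    int  step;
    unsigned char code = 0;

    step = StepSize[PrevIndex];
    diff = (long)sample - PrevSample;         // prediction error

    // Quantize the error to a sign bit plus three magnitude bits
    if (diff < 0) { code = 8; diff = -diff; }
    if (diff >= step)        { code |= 4; diff -= step; }
    if (diff >= (step >> 1)) { code |= 2; diff -= step >> 1; }
    if (diff >= (step >> 2)) { code |= 1; }

    // Reconstruct the sample exactly as the decoder will, so the
    // predictor on both sides stays in lockstep
    diffq = step >> 3;
    if (code & 4) diffq += step;
    if (code & 2) diffq += step >> 1;
    if (code & 1) diffq += step >> 2;
    if (code & 8) PrevSample -= diffq;
    else          PrevSample += diffq;

    // Clamp the predictor to the 16-bit sample range
    if (PrevSample > 32767L)  PrevSample = 32767L;
    if (PrevSample < -32768L) PrevSample = -32768L;

    // Adapt the step-size index for the next sample
    PrevIndex += StepSizeAdaption[code & 7];
    if (PrevIndex < 0)  PrevIndex = 0;
    if (PrevIndex > 88) PrevIndex = 88;

    return code;                              // 4-bit ADPCM sample
}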
Since I am also handling the decoding, the tables don't have to conform to any standard, but I would like to follow one if it exists for 12-bit values.
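In case it helps frame an answer: my naive guess is that StepSizeAdaption stays the same and only StepSize shrinks, scaling every entry down by 16 so the largest step (32767 >> 4 = 2047) matches the 12-bit full scale. A sketch of what I mean (StepSize12 and BuildStepSize12 are just names I made up):

// Naive 12-bit table: each 16-bit step divided by 16, clamped to a
// minimum of 1. My own guess, not from any standard I know of.
int StepSize12[89];

void BuildStepSize12(void)
{
    int i, s;
    for (i = 0; i < 89; i++) {
        s = StepSize[i] >> 4;            // 32767 >> 4 == 2047
        StepSize12[i] = (s < 1) ? 1 : s; // keep every step at least 1
    }
}

My worry is that this collapses the first sixteen entries to a step of 1, which presumably cripples the adaptation at low signal levels, which is why I'd prefer an established 12-bit table if one exists.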