I am trying to split a buffered audio signal (buffer size = 1024 samples), read in real time from ALSA on Linux, into several frequency bands and output a numeric level (computed as an RMS value) for each band. So far, I have the following code:
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <math.h>
#define BUFFER_SIZE 1024
// Random numbers generated with the command
// python -c 'import random; ran = random.Random(); print([ran.randint(0, 65536) for _ in range(1024)])'
// for testing purposes
// Omitted from the Stack Exchange post for length reasons
short buffer[BUFFER_SIZE] = { 57054, 11874, ..., 22716, 57055 };
int main(int argc, char** argv) {
    long buffer_sum = 0;                         /* must be initialized before accumulating */
    for (int i = 0; i < BUFFER_SIZE; i++) {
        buffer_sum += buffer[i] * buffer[i];     /* sum of squared samples */
    }
    double rms = sqrt((double)buffer_sum / BUFFER_SIZE);  /* cast avoids integer division */
    double Pvalue = rms * 0.45255;               /* scale factor applied before converting to dB */
    double dB = 20 * log10(Pvalue);
    printf("%lf\n", dB);
    return 0;
}
However, this code only outputs a single level for the full-band signal, with no frequency information. Essentially, what I am trying to achieve is an effect like the one in this video, but with raw numbers printed to stdout as a simple comma-separated list instead of a graphical visualization. What would be a good way to go about this? To be clear, the primary goal is a basic visualizer, so the filters don't need to be very accurate, and phase issues and the like don't matter much here. It does, however, need to be fast enough to compute in real time. The goal is not to generate a graph ahead of time, the way the app Spek does; this is also part of the reason why I chose C.
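For reference, here is a rough sketch of the direction I have been imagining, based on the band-pass biquad recipe from the RBJ Audio EQ Cookbook. The 48000 Hz sample rate and the Q value are just placeholders for whatever ALSA is actually configured to, and I have no idea whether this is the right approach:

    // Sketch of one band-pass stage (RBJ cookbook band-pass biquad, Direct Form I).
    // SAMPLE_RATE is a placeholder for the actual ALSA configuration.
    #include <math.h>

    #define SAMPLE_RATE 48000.0

    typedef struct {
        double b0, b1, b2, a1, a2;   // normalized coefficients (a0 == 1)
        double x1, x2, y1, y2;       // previous inputs/outputs (filter state)
    } Biquad;

    // Set up a band-pass filter centered on f0; Q controls the bandwidth.
    void bandpass_init(Biquad *f, double f0, double Q) {
        double w0 = 2.0 * M_PI * f0 / SAMPLE_RATE;
        double alpha = sin(w0) / (2.0 * Q);
        double a0 = 1.0 + alpha;
        f->b0 =  alpha / a0;
        f->b1 =  0.0;
        f->b2 = -alpha / a0;
        f->a1 = -2.0 * cos(w0) / a0;
        f->a2 = (1.0 - alpha) / a0;
        f->x1 = f->x2 = f->y1 = f->y2 = 0.0;
    }

    // Filter one sample, keeping the state across buffers.
    double bandpass_process(Biquad *f, double x) {
        double y = f->b0 * x + f->b1 * f->x1 + f->b2 * f->x2
                 - f->a1 * f->y1 - f->a2 * f->y2;
        f->x2 = f->x1;  f->x1 = x;
        f->y2 = f->y1;  f->y1 = y;
        return y;
    }

    // Per buffer, per band: run the samples through the filter and take the
    // RMS of the filtered output, just like the full-band version above.
    double band_rms(Biquad *f, const short *buf, int n) {
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            double y = bandpass_process(f, (double)buf[i]);
            sum += y * y;
        }
        return sqrt(sum / n);
    }

The idea would be to keep one such filter per band, run every incoming buffer through each of them, and apply the same RMS/dB calculation to each band's output, but I don't know if that is a reasonable way to do it.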
PS: Please note that I am very new to both DSP and the C programming language, as well as not being particularly great at math.