I'd like to ask for some advice about real-time audio data processing. At the moment I have a simple server and client built with Python sockets that send and receive audio data from the microphone until I stop it (4096 bytes per packet, though it could be much more).
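For context, here is a minimal sketch of the kind of client I mean. PyAudio, the sample rate and the server address are just placeholders, not necessarily what my real code uses:

```python
import socket
import pyaudio

CHUNK_BYTES = 4096                 # bytes per packet, as described above
RATE = 16000                       # sample rate is a placeholder, not fixed in my setup
HOST, PORT = "127.0.0.1", 5000     # hypothetical server address

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                input=True, frames_per_buffer=CHUNK_BYTES // 2)  # int16 = 2 bytes/frame

sock = socket.create_connection((HOST, PORT))
try:
    while True:
        # CHUNK_BYTES // 2 frames of int16 mono -> CHUNK_BYTES raw bytes
        data = stream.read(CHUNK_BYTES // 2)
        sock.sendall(data)         # TCP keeps the packets ordered
finally:
    stream.stop_stream()
    stream.close()
    p.terminate()
    sock.close()
```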
I see two different kinds of analysis:
- real-time: perform analysis on each X-byte packet and send the result back in the response
- batch: after receiving a lot of bytes (for example every hour), append them and store them in a DB. When the microphone stops, concatenate all the previous chunks and perform some actions on the result, like creating a waveplot image for the recorded session (rough sketch below).
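For the batch case, the post-processing step I have in mind looks roughly like this; the sample rate and the int16 mono format are assumptions, and the function name is made up:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")              # render to file, no display needed on the server
import matplotlib.pyplot as plt

def make_waveplot(chunks, rate=16000, out_path="session.png"):
    """Concatenate raw int16 PCM chunks and render a waveform image for the session."""
    audio = np.frombuffer(b"".join(chunks), dtype=np.int16)
    t = np.arange(audio.size) / rate
    plt.figure(figsize=(12, 3))
    plt.plot(t, audio, linewidth=0.3)
    plt.xlabel("time (s)")
    plt.ylabel("amplitude")
    plt.savefig(out_path, dpi=150)
    plt.close()
```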
For this kind of usage, which self-hosted DB would you recommend?
How can I concatenate these large volumes of data at regular intervals and add them to the DB?
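To be concrete, this is roughly what I picture for the periodic flush; SQLite here is only a stand-in for "some self-hosted DB", and the schema, names and interval are all hypothetical:

```python
import sqlite3
import time

FLUSH_INTERVAL = 3600              # seconds, i.e. the "every hour" mentioned above

conn = sqlite3.connect("sessions.db")
conn.execute("""CREATE TABLE IF NOT EXISTS audio_segments (
                    session_id TEXT,
                    created_at REAL,
                    pcm        BLOB)""")

buffer = bytearray()
last_flush = time.time()

def on_packet(session_id: str, packet: bytes) -> None:
    """Called for every 4096-byte packet received from the socket."""
    global last_flush
    buffer.extend(packet)
    if time.time() - last_flush >= FLUSH_INTERVAL:
        conn.execute("INSERT INTO audio_segments VALUES (?, ?, ?)",
                     (session_id, time.time(), bytes(buffer)))
        conn.commit()
        buffer.clear()
        last_flush = time.time()
```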
For only 6 minutes of audio I received something like 32 MB of data. Maybe I should push each chunk into Redis as soon as I receive it, rather than keeping it in a Python object. Another option would be to serialize the audio data as base64. I'm just afraid of losing speed, since I'm currently sending the data over TCP.
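The Redis option I'm thinking of would be something like the sketch below (connection details and key names are placeholders). As far as I know redis-py accepts raw bytes directly, so base64 would only add roughly 33% overhead without being required:

```python
import redis

r = redis.Redis(host="localhost", port=6379)   # hypothetical connection details

def store_packet(session_id: str, packet: bytes) -> None:
    # Push each raw packet onto a per-session list as soon as it arrives.
    r.rpush(f"audio:{session_id}", packet)

def load_session(session_id: str) -> bytes:
    # Join all packets back together when the microphone stops.
    return b"".join(r.lrange(f"audio:{session_id}", 0, -1))
```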
Thanks for your help!