I'm doing a project where I'm acquiring accelerometer data from a custom-designed measurement system. The system is an accelerometer connected to a Raspberry Pi 3 running a Python script, which samples the data and periodically writes it to a CSV file. The CSV files are then transferred to a computer, where I run FFT analysis in MATLAB.
I would like to calculate an average of the FFT results from several datasets, in order to determine whether there is any relationship between the frequencies and amplitudes of the different datasets.
The sample-rate stability of the Python script on the Raspberry Pi 3 leaves something to be desired. In the script I have set the sample rate to 615 Hz, which the Pi can mostly maintain until it has to write the data to the CSV file. The script currently writes to file when the buffer reaches 615 samples in length (effectively 615 * 4). Because of this, the time between two consecutive samples jumps from roughly 1.6 ms to approximately 20 ms whenever a CSV write occurs. The average sample rate reported by MATLAB is approximately 600 Hz for most of the datasets I have analyzed so far. Because of this inconsistent sample rate, and because my recordings have different lengths, the frequency bins end up with different sizes when I compare the datasets.
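For reference, here is a stripped-down sketch of what the acquisition loop does (the real script is more involved, and `read_accel()` is just a placeholder for my actual sensor call). Logging a timestamp per sample is what lets me see the drift afterwards:

```python
import csv
import time

SAMPLE_RATE = 615           # target rate in Hz
PERIOD = 1.0 / SAMPLE_RATE  # ~1.63 ms between samples
BUFFER_SIZE = 615           # flush to disk roughly once per second of data

def read_accel():
    """Placeholder for the actual accelerometer driver call."""
    return (0.0, 0.0, 0.0)

with open("capture.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["t", "x", "y", "z"])
    buffer = []
    while True:
        t = time.time()  # per-sample timestamp, stored in the CSV
        x, y, z = read_accel()
        buffer.append((t, x, y, z))
        if len(buffer) >= BUFFER_SIZE:
            writer.writerows(buffer)  # this blocking write causes the ~20 ms gap
            buffer.clear()
        # naive pacing: loop overhead is not compensated for, which is
        # presumably why the average rate ends up below 615 Hz
        time.sleep(PERIOD)
```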
I have tried to investigate whether this sample-rate drift could corrupt the FFT results, without any significant findings. The FFT results seem mostly reasonable, but I have no reference to compare them against. Has anyone experienced similar cases of running an FFT on data with a non-uniform sample rate?
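One idea I have been considering, sketched here in NumPy because that is quicker for me to prototype than MATLAB (`interp1` would do the same job there): use the per-sample timestamps to linearly interpolate each recording onto a uniform time grid before the FFT, so the occasional 20 ms gaps stop distorting the spectrum. The data below is synthetic, just to make the sketch self-contained:

```python
import numpy as np

# Synthetic stand-in for one recording: mostly ~1.63 ms steps with
# occasional ~20 ms gaps, carrying a 50 Hz sine
t = np.cumsum(np.random.choice([0.00163, 0.020], size=5000, p=[0.99, 0.01]))
x = np.sin(2 * np.pi * 50.0 * t)

fs = 600.0  # the average rate detected in MATLAB for that recording
t_uniform = np.arange(t[0], t[-1], 1.0 / fs)
x_uniform = np.interp(t_uniform, t, x)  # linear interpolation onto uniform grid

spectrum = np.abs(np.fft.rfft(x_uniform)) / len(x_uniform)
freqs = np.fft.rfftfreq(len(x_uniform), d=1.0 / fs)
```

Would this kind of resampling be a reasonable way to deal with the drift, or does the linear interpolation itself introduce artifacts I should worry about?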
Are there any suggestions for how I can calculate an average of multiple FFTs when the datasets differ slightly in detected sample rate and length?
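For concreteness, this is the kind of averaging I am imagining (NumPy sketch; the 0.5 Hz common grid and the 300 Hz upper limit, just under half the detected rate, are arbitrary choices on my part): compute each magnitude spectrum with its own detected sample rate, interpolate every spectrum onto one common frequency grid, and then average bin by bin.

```python
import numpy as np

def spectrum(x, fs):
    """One-sided amplitude spectrum of a uniformly (re)sampled recording."""
    X = np.abs(np.fft.rfft(x)) / len(x)
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return f, X

# Common grid: 0.5 Hz bins up to ~fs/2 of the slowest recording
f_common = np.arange(0.0, 300.0, 0.5)

def average_spectra(recordings):
    """recordings: list of (samples, detected_sample_rate) pairs,
    each with its own length and slightly different rate."""
    interpolated = []
    for x, fs in recordings:
        f, X = spectrum(x, fs)
        interpolated.append(np.interp(f_common, f, X))  # map onto common bins
    return np.mean(interpolated, axis=0)
```

Is interpolating each spectrum onto a common frequency grid like this a valid approach, or is there a standard method for averaging spectra of unequal length?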