I have time series in a DataFrame. The time series capture trajectories of the same traversed path, i.e. acceleration and rotation in the x, y and z directions, plus a label (str). The timestamp indicates the point in time at which the values were observed. The problem now is that the timestamps only consist of whole seconds:
timestamp ...
1.0
1.0
1.0
1.0
1.0
1.0
1.0
1.0
2.0
2.0
2.0
2.0
2.0
...
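Since many rows share the same whole second, one idea I had is to spread the repeated timestamps evenly within each second, so the index becomes strictly increasing. Here is a sketch of that (the column names are just placeholders for my real data):

```python
import pandas as pd

# toy frame mimicking my data: 8 samples in second 1, 5 samples in second 2
df = pd.DataFrame({"timestamp": [1.0] * 8 + [2.0] * 5,
                   "acc_x": range(13)})

# offset each row by its position within its second, divided by the
# number of samples in that second -> evenly spaced fractional timestamps
grp = df.groupby("timestamp")["timestamp"]
df["timestamp"] = df["timestamp"] + grp.cumcount() / grp.transform("size")
```

I am not sure whether this even spacing is a valid assumption, though, since the true sampling instants within a second are unknown.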
Each recorded time series has around 10000-20000 such rows. I now need them all to have the same dimensions, i.e. the same number of rows for every time series. So I thought about resampling: every time series should have the average row count, in this case 15000. Each time series with fewer than 15000 rows should be upsampled, whereas time series with more than 15000 rows should be downsampled. In general, there are around 15-20 observations per second, i.e. a timestamp is repeated over 15-20 rows. How would I achieve this without losing too much information? Do you have any ideas? This step is needed to get the right input format for training a specific network.
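To make the question concrete, here is a sketch of what I have in mind: linearly interpolate every value column onto a fixed number of evenly spaced positions, which handles both up- and downsampling. The helper name and column names are placeholders, not my actual code:

```python
import numpy as np
import pandas as pd

def resample_to_length(df: pd.DataFrame, n_target: int, value_cols) -> pd.DataFrame:
    """Linearly interpolate each value column onto n_target evenly spaced
    points over the series' normalized duration [0, 1]. Works whether the
    series is shorter (upsampling) or longer (downsampling) than n_target."""
    old = np.linspace(0.0, 1.0, num=len(df))   # original sample positions
    new = np.linspace(0.0, 1.0, num=n_target)  # target sample positions
    out = {c: np.interp(new, old, df[c].to_numpy()) for c in value_cols}
    return pd.DataFrame(out)

# usage: shrink a 20-row toy series to 15 rows
toy = pd.DataFrame({"acc_x": np.arange(20.0), "acc_y": np.arange(20.0) * 2})
resampled = resample_to_length(toy, 15, ["acc_x", "acc_y"])
```

My worry is that plain linear interpolation has no anti-aliasing when downsampling, so maybe something like `scipy.signal.resample` would preserve more information? Is this a reasonable approach at all?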