So basically I have a dataset with 2 columns:
| Time (ms) | Speed (m/s) |
|-----------|-------------|
| 0         | 0.5         |
| 20        | 1.5         |
| 40        | 4.5         |
| 60        | 8.5         |
| 80        | 8.9         |
| 100       | 7.5         |
| 120       | 4.3         |
| 140       | 1.5         |
| 160       | 0.5         |
| 180       | 0.5         |
| 200       | 0.5         |
| 220       | 0.5         |
This is a short sample of a person running, with their speed sampled every 20 milliseconds.
So I'm trying to detect sprints (when the person is running at full speed over a short distance).
Due to the nature of my requirements I'm writing the program in C. I can easily do it in a dirty way: define some min/max thresholds, look for peaks, and there's the sprint. But I'm thinking there must be a better way to do it, maybe some machine learning algorithm I'm not aware of.
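To show what I mean by "dirty", here's roughly the kind of thing I have in mind; the `SPRINT_SPEED` and `MIN_DURATION_MS` values are just numbers I eyeballed from the sample above, not anything principled:

```c
#include <stdio.h>

/* One 20 ms sample: timestamp in milliseconds, speed in m/s. */
typedef struct {
    int    time_ms;
    double speed;
} Sample;

/* Crude sprint detection: a sprint is any stretch where the speed stays
 * above SPRINT_SPEED for at least MIN_DURATION_MS. Both values are
 * placeholders picked by eyeballing the sample data. */
#define SPRINT_SPEED    4.0   /* m/s, made-up threshold */
#define MIN_DURATION_MS 40    /* made-up minimum sprint length */

static void find_sprints(const Sample *s, int n)
{
    int start = -1;  /* index where the current candidate sprint began */

    for (int i = 0; i < n; i++) {
        if (s[i].speed >= SPRINT_SPEED) {
            if (start < 0)
                start = i;  /* speed just crossed the threshold */
        } else if (start >= 0) {
            /* speed dropped back below the threshold: close the candidate */
            if (s[i - 1].time_ms - s[start].time_ms >= MIN_DURATION_MS)
                printf("sprint from %d ms to %d ms\n",
                       s[start].time_ms, s[i - 1].time_ms);
            start = -1;
        }
    }
    /* handle a sprint that runs to the end of the data */
    if (start >= 0 && s[n - 1].time_ms - s[start].time_ms >= MIN_DURATION_MS)
        printf("sprint from %d ms to %d ms\n",
               s[start].time_ms, s[n - 1].time_ms);
}

int main(void)
{
    Sample data[] = {
        {0, 0.5}, {20, 1.5}, {40, 4.5}, {60, 8.5}, {80, 8.9}, {100, 7.5},
        {120, 4.3}, {140, 1.5}, {160, 0.5}, {180, 0.5}, {200, 0.5}, {220, 0.5},
    };
    find_sprints(data, (int)(sizeof data / sizeof data[0]));
    return 0;
}
```

This works on the sample above (it reports one sprint from 40 ms to 120 ms), but the thresholds are hard-coded, which is exactly what I'd like to avoid.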
It would be great if I could teach the program what a sprint is by showing it some examples, and then have it detect sprints with no more intervention from my side. I'm just not sure how to get started on that.
Has anyone come across something similar who can point me in the right direction?