
I have a text file with two columns and 135001 rows. The first column is amplitude and the second column is the corresponding time. For example, some of the rows look like the sample below, and all the others are similar:

0.000000 00:04:07.680000
0.000000 00:04:08.320000
0.000002 00:04:08.960000
0.000002 00:04:09.600000
0.000000 00:04:10.240000

I plotted the file and cut out a small part of it, shown below:

[plot of the data]

I need to detect at which points in my data there is a change. What I need to do is count exactly the number of squares (which I added by hand to the figure of the data) in the picture, and also extract their corresponding times. How can I do this? The important thing is that each section inside a square should be counted as one. I was thinking of taking the derivative, but I don't think it will work. I have found a solution on Stack Overflow, but it is in R. If someone could suggest something similar in Python, that would be great. The similar solution in R is here: How to find changing points in a dataset
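One possible approach in numpy, assuming each hand-drawn square corresponds to a contiguous stretch where the amplitude stays above some threshold (the threshold value here is an illustrative assumption, not from the question):

```python
import numpy as np

def count_events(amplitude, times, threshold):
    """Count contiguous sections where amplitude exceeds `threshold`
    (one square = one event) and return each event's (start, end) time."""
    above = amplitude > threshold
    # Rising (+1) and falling (-1) edges of the boolean mask; the
    # zero-padding closes events that touch either end of the recording.
    edges = np.diff(above.astype(int), prepend=0, append=0)
    starts = np.flatnonzero(edges == 1)
    ends = np.flatnonzero(edges == -1) - 1   # last index inside each event
    return len(starts), [(times[s], times[e]) for s, e in zip(starts, ends)]
```

Since the time column is a `HH:MM:SS.ffffff` string, the two columns could be loaded with something like `amp_str, times = np.loadtxt("data.txt", dtype=str, unpack=True)` and then `amplitude = amp_str.astype(float)`.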

OnEarth
  • A quick glance at the R code reflects my initial thought, and that's to create a threshold at which the difference marks the start and stop of a changing time series. – Kyle Jul 31 '19 at 14:06
  • Should the threshold be on the time-series values, or on their derivative? – OnEarth Jul 31 '19 at 14:09
  • It appears his threshold is used for the time-series values, e.g. from the post provided: c(cumsum(rle(abs(diff(vec))>10)$lengths)+1) This tells me that he applies a difference to the vector; Python should have something similar, maybe in numpy? That compares the current to the next value. I'd suggest looking at the R code; you'll find that Matlab, Python and R, in my opinion, translate nicely when you know what the functions do. – Kyle Jul 31 '19 at 14:13
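For reference, the R one-liner quoted in the comment above can be translated to numpy roughly as follows (a sketch preserving R's 1-based indexing, not a drop-in solution for the original data):

```python
import numpy as np

def r_style_change_points(vec, threshold):
    """numpy translation of the R expression
    c(cumsum(rle(abs(diff(vec)) > threshold)$lengths) + 1):
    returns the 1-based positions in `vec` where a new run of
    'jump / no-jump' behaviour begins."""
    jumps = np.abs(np.diff(vec)) > threshold   # abs(diff(vec)) > threshold
    # Run ends: positions where the boolean sequence changes value, plus
    # the end of the final run -- this mirrors cumsum(rle(...)$lengths).
    run_ends = np.append(np.flatnonzero(jumps[1:] != jumps[:-1]) + 1, len(jumps))
    return run_ends + 1                        # R's trailing +1 (1-based)
```

`np.diff` plays the role of R's `diff`, and the boundary detection on the boolean array replaces `rle`, which has no direct numpy equivalent.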

0 Answers