My data comes in the form shown below. As you can see, the third column (second if you count from 0) touches the one before it when its values rise to the next order of magnitude, and there are artifacts in the last column that come from non-data input being recorded.
17:10:39 2.039 26.84 4.6371E-9 -0.7$R200$O100
17:10:41 2.082 27.04 4.6334E-9 -0.4
17:10:43 1.980 26.97 4.6461E-9 0.3
17:10:45 2.031 26.87 4.6502E-9 1.0$R200
17:10:47 2.090 27.09 4.6296E-9 0.1
...
18:49:40 1.930226.34 2.8246E-5 7.1
18:49:42 2.031226.04 2.8264E-5 8.2
I did fix all of this by hand, using "|" as the delimiter instead of " " and cutting away the few artifacts, but it was a pain.
Since I expect to get even larger data sets from this machine in the future: are there any tips on how to write a Python script for this, or are there existing Linux-based tools that can fix this CSV / produce a new, corrected CSV from it?
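For reference, something along these lines is roughly what I had in mind: a rough sketch that assumes the second column always has exactly three decimal places and that anything from a "$" onward is junk. I don't know whether a regex is the right approach here.

import csv
import re
import sys

# Assumed layout of one line: HH:MM:SS  a.aaa  bb.bb  c.ccccE-n  d.d[$junk]
# Assumption: the second column always has exactly three decimal places, so it
# can be split off even when the next column grows and fuses with it.
# Anything after the last plain decimal number (the "$R200$O100" artifacts)
# is simply not captured.
LINE_RE = re.compile(
    r'^(\d{2}:\d{2}:\d{2})\s+'  # time stamp
    r'(-?\d+\.\d{3})\s*'        # column 1: fixed three decimals, may touch the next
    r'(-?\d+\.\d+)\s+'          # column 2: may have gained an extra digit
    r'(\S+)\s+'                 # column 3: value in scientific notation
    r'(-?\d+\.\d+)'             # column 4: last reading, trailing junk ignored
)

def fix_file(src, dst):
    """Read the raw instrument log and write a '|'-delimited CSV."""
    with open(src) as fin, open(dst, 'w', newline='') as fout:
        writer = csv.writer(fout, delimiter='|')
        for line in fin:
            m = LINE_RE.match(line)
            if m:
                writer.writerow(m.groups())
            else:
                sys.stderr.write('skipped: ' + line)  # keep an eye on odd lines

if __name__ == '__main__':
    fix_file(sys.argv[1], sys.argv[2])

I would run it as "python fix_log.py raw.txt fixed.csv" (the script name and file names are just placeholders), but I'd be happy to hear about more robust approaches or standard command-line tools that do the same job.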