First time posting, so hopefully I can explain what I'm trying to ask clearly.
I have C++ code that records timestamps from an FPGA down to the nanosecond level and writes each value to a CSV. In the same CSV I also calculate the difference between consecutive timestamps.
When I load the CSV into Python and compute the timestamp differences there, I get mostly 1s (the signal is based off a PPS), but also random impulse points. Any idea why I get exactly 1s on the C++ side but mostly 1s and occasionally 1 +- 3E-14 in Python?
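Roughly what I'm doing on the Python side (simplified; the real CSV has more columns and the column name here is made up). The timestamp column is seconds with a nanosecond fraction:

```python
import pandas as pd

# pandas parses the timestamp column as float64 by default
df = pd.read_csv("timestamps.csv")

# difference between consecutive rows
diffs = df["timestamp"].diff().dropna()

# mostly 1.0, plus the odd value that is off by ~3e-14
print(diffs.value_counts())
```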
Any info or guidance would be appreciated. From searching, it seems like it could be due to floating point, but shouldn't that affect both?
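If it is float64, here is the kind of artifact I think could explain the magnitude I'm seeing (numbers made up, just to illustrate):

```python
# Two timestamps one second apart that straddle a power of two (128) land on
# grids of doubles with different spacing, so their difference is not exactly 1.
a = 127.000000042
b = 128.000000042
print(b - a)   # 1.0000000000000142, not 1.0
```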