I am trying to write a high-data-rate UDP streaming interface simulator/tester in Java 8 for a realtime machine that has a very accurate time processor card. Every message carries a time field with microsecond resolution, and the interface relies on that high-precision clock for packet ordering. I don't have the time card, so I need to simulate it out of the equation. I figured I could get away with something like this:
TimeUnit.MILLISECONDS.toMicros(System.currentTimeMillis());
It does work, but after running for extended periods I found that UDP bites me: I end up sending a couple hundred packets with the exact same time stamp, and if any arrive out of order the receiving side has no way to tell. The interface tolerates this to an extent, but it's never an issue on the real system with its high-precision clock.
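To illustrate the collision, a quick loop like this (the class and variable names are just mine for demonstration) shows that nearly every consecutive stamp repeats, since the source clock only ticks in milliseconds:

import java.util.concurrent.TimeUnit;

public class StampCollisionDemo {
    public static void main(String[] args) {
        long duplicates = 0;
        long prev = -1;
        for (int i = 0; i < 1000; i++) {
            long stamp = TimeUnit.MILLISECONDS.toMicros(System.currentTimeMillis());
            if (stamp == prev) {
                duplicates++; // same microsecond value as the previous call
            }
            prev = stamp;
        }
        System.out.println("duplicate stamps out of 1000: " + duplicates);
    }
}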
To mitigate this I have added synthetic microseconds on top of currentTimeMillis(), as follows:
import java.util.concurrent.TimeUnit;

class TimeFactory {

    private long prev; // last raw stamp, in microseconds
    private long incr; // synthetic microseconds added within one millisecond

    public long now() {
        final long now = TimeUnit.MILLISECONDS.toMicros(System.currentTimeMillis());
        long synthNow = now;
        if (now == prev) {
            // Same millisecond as the last call: bump the synthetic offset,
            // capped at 999 so the stamp never spills into the next millisecond.
            if (incr < 999) {
                incr += 1;
            }
            synthNow += incr;
        } else {
            // The wall clock ticked over; reset the synthetic offset.
            incr = 0;
        }
        prev = now;
        return synthNow;
    }
}
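For context, here is roughly how I call it when stamping outgoing messages (the buffer layout and offset are placeholders, not my real message format). Note that now() mutates prev and incr, so I keep a single instance and call it from the one sender thread:

import java.nio.ByteBuffer;

public class SendExample {
    private static final int TIME_FIELD_OFFSET = 0; // placeholder offset

    public static void main(String[] args) {
        TimeFactory clock = new TimeFactory(); // single shared instance
        ByteBuffer packet = ByteBuffer.allocate(64);
        packet.putLong(TIME_FIELD_OFFSET, clock.now()); // microsecond stamp
        // ... fill in the rest of the message and hand it to the UDP sender
    }
}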
Has anyone ever dealt with synthetic time like this? Is there a way to tighten this code up, or a better way to handle it altogether (using nanoTime somehow)? If I ever did send more than 999 packets in one millisecond, would it be safe to increment into the milliseconds range (i.e., an increment of 1000 or more)? It looks like I'm seeing roughly a 10-15 ms difference between successive currentTimeMillis() updates, but I'm sure that is very system dependent.
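For concreteness, this is the kind of nanoTime-based approach I had in mind (NanoClock and its fields are my own illustrative names): anchor the wall-clock epoch once at construction, then derive microseconds from the monotonic nano counter, since nanoTime by itself has no absolute meaning:

import java.util.concurrent.TimeUnit;

class NanoClock {
    // Wall-clock anchor taken once, in microseconds since the epoch.
    private final long epochMicros =
            TimeUnit.MILLISECONDS.toMicros(System.currentTimeMillis());
    // Monotonic reference point taken at the same moment.
    private final long nanoStart = System.nanoTime();

    public long now() {
        long elapsedMicros =
                TimeUnit.NANOSECONDS.toMicros(System.nanoTime() - nanoStart);
        return epochMicros + elapsedMicros;
    }
}

My worry with that sketch is that nanoTime is monotonic but free-running, so over a long run the derived stamps could drift away from actual wall-clock time. That may not matter for packet ordering, but I'd like to hear from anyone who has tried it.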