
I have a thread that generates a network packet every 40 ms (25 Hz). It's an infinite loop (until told to stop), and I'm using Thread.Sleep.

When I build the packet, one of the values is the current GPS time, using DateTime.UtcNow and adding the leap seconds.

This works fine when it starts, but it drifts over time: about 2 hours later, it's 5 seconds behind.

I have a Symmetricom GPS time server and I'm using their software as the NTP client; it says the cumulative drift on the PC is about 1.2 seconds (most of that, I've noticed, is drift while the PC is off and not syncing to NTP).

Anyone have any idea what's going wrong? I know Thread.Sleep isn't perfect timing, and Windows isn't an RTOS, but the drift doesn't make sense; dropping frames would.

I can't post code due to some proprietary and ITAR issues, but I can post a rough outline:

while (!abort)
{
    currentTime = DateTime.UtcNow + leapSeconds;
    buildPacket(currentTime);
    stream.Write(msg, 0, sendSize); // NetworkStream
    Thread.Sleep(40);
}

I'm on Windows 7 and using Visual Studio 2010.

mezoid
Mike D
    You are using `Sleep(40)`, then doing something? Have you accounted for the amount of time it takes to build the packet? – Jeremy Holovacs Jan 13 '14 at 20:55
    I'd say 5 seconds isn't that bad, considering you had over 180k ticks. Even a very small imprecision in Thread.Sleep ([and it is indeed not perfect](http://stackoverflow.com/a/1303708/2316200)) will lead to a big interval over time. It's even worse with Windows.Timers, where you can get 10 seconds off after about 30 minutes. – Pierre-Luc Pineault Jan 13 '14 at 20:57
  • So you're basically saying that after two hours, the system time differs from the time of the GPS receiver by 5 seconds? That would simply indicate that the clock synchronization is not working or not done often enough. Keeping the PC clock in sync with the GPS receiver requires constant clock alignment. – PMF Jan 13 '14 at 20:58
  • @PMF Suppose Thread.Sleep(40) takes 40.1 ms in reality; then this would lead to an 18-second drift. As Pierre-Luc Pineault said, it is better than I would expect. – L.B Jan 13 '14 at 21:00
  • Perhaps using a Timer would be better. – Rafael Jan 13 '14 at 21:03
    Try decoupling the building and sending of the packet from the measuring of time. From your looping thread kick off a separate thread to do the work. – Eric Scherrer Jan 13 '14 at 21:05
    @Rafael Nope, accuracy of the classic timer is 55ms, which is over OP's timespan. – Pierre-Luc Pineault Jan 13 '14 at 21:06
  • Here's what I don't understand. My sequence of events is: wait approx 40 ms, get the current time, build the packet, repeat. If the 40 ms isn't exact, how does that cause the time to drift? I expect the timing of the packets to not be exactly 40 ms apart, but that each timestamp is accurate. The PC is in sync with the time server to within a few milliseconds, according to Symmetricom, so I'm pretty sure that's not the source of my problem. – Mike D Jan 13 '14 at 21:10
  • I have edited your title. Please see, "[Should questions include “tags” in their titles?](http://meta.stackexchange.com/questions/19190/)", where the consensus is "no, they should not". – John Saunders Jan 13 '14 at 21:12
  • @MikeD I would dynamically calculate the *drift* in the loop and adjust the `40` according to this. Some times `39` or `41` for ex. – L.B Jan 13 '14 at 21:24
  • @miked: You say the time is in sync, so I don't understand the problem. How do you measure the drift? – PMF Jan 13 '14 at 21:27
  • OK, I discovered that I am doing something very stupid and apologize. I am using DateTime.Ticks for my time, instead of calculating microseconds from midnight using the system time, which is not what I intended to use, and is probably the source of my drift. I should know in a couple of hours. (The software I'm passing packets to wants the date in terms of the Julian Date and microseconds since midnight.) – Mike D Jan 13 '14 at 21:40
  • That's not my only problem, it's drifting less but still drifting. Going to look at profiling next to see if that might give me a clue. – Mike D Jan 13 '14 at 23:04
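For reference, the conversion Mike D describes in the comments (microseconds since midnight, rather than raw DateTime.Ticks) can be sketched like this; the class and method names are made up for illustration:

```csharp
using System;

class MicrosecondsSinceMidnightSketch
{
    static long MicrosecondsSinceMidnight(DateTime utc)
    {
        // TimeOfDay ticks are 100 ns units, so 10 ticks per microsecond.
        return utc.TimeOfDay.Ticks / 10;
    }

    static void Main()
    {
        // One second past midnight UTC should be exactly 1,000,000 microseconds.
        var t = new DateTime(2014, 1, 13, 0, 0, 1, DateTimeKind.Utc);
        Console.WriteLine(MicrosecondsSinceMidnight(t)); // 1000000
    }
}
```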

4 Answers


I think this happens because the time each iteration of the while loop takes is 40 ms (your sleep) plus the time needed to execute the code that builds the packet.

Have you tried using a System.Threading.Timer? That way your code will execute on a separate thread from the one that is counting your time. However, I don't think the precision is good enough to keep your real-time application running for long.
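A minimal sketch of the System.Threading.Timer approach; the class name and the three-callback shutdown are illustrative, not from the question:

```csharp
using System;
using System.Threading;

class PacketTimerSketch
{
    static void Main()
    {
        // Signalled after three callbacks so the demo terminates.
        var fired = new CountdownEvent(3);

        // Fire the callback every 40 ms on a thread-pool thread. The
        // period keeps counting while the callback runs, unlike a
        // Thread.Sleep placed after the work.
        using (var timer = new Timer(
            _ =>
            {
                // Stand-in for the real buildPacket/stream.Write work.
                try { fired.Signal(); } catch (InvalidOperationException) { }
            },
            null, dueTime: 0, period: 40))
        {
            fired.Wait();
        }
        Console.WriteLine("timer fired 3 times");
    }
}
```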

Cosmin Vană
    And also because the thread is not awakened at _exactly the precise millisecond_ you want it to be, but when the OS gives it the chance to (and that depends on a lot of factors, like the current process count) – Pierre-Luc Pineault Jan 13 '14 at 21:04

There is probably a lot of overhead involved, including network IO. You could decouple the timing from the packet creation like this:

public void Timing()
{
    // while (true) to simplify...
    // You should probably remember the last time sent and adjust the 40ms accordingly
    while (true)
    {
        SendPacketAsync(DateTime.UtcNow);
        Thread.Sleep(40);
    }
}

public Task SendPacketAsync(DateTime timing)
{
    return Task.Factory.StartNew(() => {
        var packet = ...; // use timing
        packet.Send(); // abstracted away, probably IO blocking
    });
}
Karhgath

The other answers are on to the problem... You have overhead on top of the fact that you are sleeping.

TPL

If you operate in the world of TPL, then you can create a pretty simple solution:

while (running)
    await Task.WhenAll(Task.Delay(40), Task.Run(() => DoIO()));

This is a great solution because it will wait for the IO operation (DoIO()) in case it takes longer than 40 ms. It also avoids blocking a thread with Thread.Sleep(), which is always a good thing.
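A more complete, self-contained version of the loop above; the five-packet cutoff and the DoIO stub are just for the sketch:

```csharp
using System;
using System.Threading.Tasks;

class TplLoopSketch
{
    static async Task Main()
    {
        bool running = true;
        int sent = 0;

        while (running)
        {
            // The 40 ms delay and the IO run concurrently; each
            // iteration takes max(40 ms, DoIO duration).
            await Task.WhenAll(Task.Delay(40), Task.Run(() => DoIO()));
            if (++sent == 5)
                running = false;
        }
        Console.WriteLine("packets sent: " + sent);
    }

    static void DoIO()
    {
        // Stand-in for building and writing the packet.
    }
}
```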

Timer

So instead, use a timer (System.Threading.Timer) that fires every 40 ms. This way you can be building and sending the packet while the timer is still counting. The risk here is that if the IO operation takes longer than 40 ms, you have a race condition (overlapping callbacks).

NOTE

40 ms is an OK interval to expect an accurate callback. HOWEVER, let's say you decided you needed 4 ms; then the OS probably wouldn't be able to provide that kind of resolution. You would need a real-time OS for that kind of accuracy.

poy

I would guess you are being bitten by two things.

  • The default timer resolution in modern Windows is 10 ms (although there are no guarantees about that). If you leave it at that you will, at best, have lots of jitter. You can use the multimedia API to increase the timer resolution. Search MSDN for timeGetDevCaps, timeBeginPeriod and timeEndPeriod.
  • You should calculate your sleep interval based on some fixed start time, rather than sleeping 40 ms on every iteration. The code below illustrates that.

Here is some skeletal code:

static void MyFunction()
{
    //
    // Use timeGetDevCaps and timeBeginPeriod to set the system timer
    // resolution as close to 1 ms as it will let you.
    //

    var nextTime = DateTime.UtcNow;

    while (!abort)
    {
        // Send your message, preferably do it asynchronously.

        nextTime = nextTime.AddMilliseconds(40);
        var sleepInterval = nextTime - DateTime.UtcNow;

        // May want to check that sleepInterval is positive;
        // Thread.Sleep throws on a negative TimeSpan.

        Thread.Sleep(sleepInterval);
    }

    //
    // Use timeEndPeriod to restore system timer resolution.
    //
}

I don't know of any .Net wrappers for the multimedia time* API functions. You will probably need to use PInvoke to call them from C#.
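A rough PInvoke sketch, assuming the standard winmm.dll signatures; the class name and the platform guard are illustrative:

```csharp
using System;
using System.Runtime.InteropServices;

class TimerResolutionSketch
{
    // Multimedia timer API from winmm.dll; Windows-only.
    [DllImport("winmm.dll", ExactSpelling = true)]
    static extern uint timeBeginPeriod(uint uMilliseconds);

    [DllImport("winmm.dll", ExactSpelling = true)]
    static extern uint timeEndPeriod(uint uMilliseconds);

    static void Main()
    {
        if (Environment.OSVersion.Platform != PlatformID.Win32NT)
        {
            Console.WriteLine("winmm.dll is only available on Windows");
            return;
        }

        // Request 1 ms system timer resolution; returns 0 on success.
        timeBeginPeriod(1);
        try
        {
            // ... run the 40 ms send loop here ...
        }
        finally
        {
            // Every timeBeginPeriod call must be matched by timeEndPeriod.
            timeEndPeriod(1);
        }
        Console.WriteLine("timer resolution restored");
    }
}
```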

OldFart