
There's a time-critical application that handles messages from a trading server, where we get around 10K msgs per second... There are times when the application seems to take a long time doing the inserts into the database... After several days of going back and forth with the dev team about which side is taking the time, our DB team decided to build a simple C# app that resides on a server in the same rack and on the same network as the database server. The database in question is SQL Server 2012 Standard.

The times were taken from ADO.NET this way...

var currT1 = DateTime.Now;   // timestamp taken immediately before the call
sqlComm.ExecuteNonQuery();
var loadT = DateTime.Now;    // timestamp taken immediately after the call
var diff = loadT - currT1;   // logged as the app-side duration

The times from SQL Server were taken from the StartTime and EndTime columns of a server-side trace... The two servers are time-synced, but there's a difference of 10-15 ms between their clocks...
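
Roughly, the DB-side times are read from the trace file like this (a sketch; the connection string, trace path and filter below are placeholders, not our actual setup):

// Requires: using System; using System.Data.SqlClient;
using (var conn = new SqlConnection("Server=dbserver;Integrated Security=true"))   // placeholder
using (var cmd = new SqlCommand(
    @"SELECT StartTime, EndTime, Duration
      FROM sys.fn_trace_gettable(N'D:\traces\app_timing.trc', DEFAULT)   -- placeholder path
      WHERE Duration IS NOT NULL", conn))
{
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // Note: Duration is reported in microseconds when the trace is saved to a file.
            Console.WriteLine("{0:HH:mm:ss.fff}  {1:HH:mm:ss.fff}  {2} us",
                reader["StartTime"], reader["EndTime"], reader["Duration"]);
        }
    }
}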

Now, what's making me want to bang my head on something is this: it's understandable that the application takes longer than the DB (because it has to do processing as well as other stuff), but in some cases the DB reports it took 4 ms while the app says it took zero ms!!

I definitely think the bug is in the test app... But there's nothing separating the DB call and the two timestamps... The log reads like this: app times (start, end, diff, method) followed by the DB times (StartTime, EndTime, diff)...

10:46:06.716    (app start)
10:46:06.716    (app end)
0:00:00.000     (app diff)
DataAdapter     (method)
10:46:06.697    (DB StartTime)
10:46:06.700    (DB EndTime)
0:00:00.003     (DB diff)

Is there something else I should provide?

  • VPS? That would explain it - timing, especially short term, is unreliable on a VPS. – TomTom Sep 15 '14 at 15:20
  • Also not sure using `DateTime.Now` is an accurate way to time things. Much better to use the [Stopwatch](http://msdn.microsoft.com/en-us/library/system.diagnostics.stopwatch.aspx) class. – DavidG Sep 15 '14 at 15:23
  • `DateTime.Now` only updates as often as your system clock ticks. The default on most PCs is to update the clock approximately every 15.6 milliseconds. You can't measure intervals smaller than that with `DateTime.Now`. On the other hand, SQL Server uses other time sources, or adjusts the system timer update frequency, so that it can provide higher-resolution timestamps. The `datetime` data type has approximately 3ms resolution. – Mike Dimmick Sep 15 '14 at 15:37
  • It now gives much more plausible differences, but there are still many instances where the app says it took 1 ms but the DB says it took 3 ms... Is there something in the server-side trace I should look at? – Mirza Faraz Beg Sep 16 '14 at 17:05
  • startTime, endTime and duration are taken from the fn_trace_gettable function... But it sure looks dodgy now... – Mirza Faraz Beg Sep 16 '14 at 17:07
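
To illustrate Mike Dimmick's point above, here is a quick stand-alone sketch (hypothetical demo code, not part of the test app) that shows how coarse `DateTime.Now` is compared with `Stopwatch`:

using System;
using System.Collections.Generic;
using System.Diagnostics;

class ClockResolutionDemo
{
    static void Main()
    {
        // Busy-wait for 100 ms and record every distinct DateTime.Now value seen.
        var seen = new HashSet<long>();
        var sw = Stopwatch.StartNew();
        while (sw.ElapsedMilliseconds < 100)
            seen.Add(DateTime.Now.Ticks);

        // With the default ~15.6 ms system clock tick this prints a single-digit
        // count, i.e. DateTime.Now cannot resolve a 3-4 ms database call.
        Console.WriteLine("Distinct DateTime.Now values in 100 ms: " + seen.Count);
        Console.WriteLine("Stopwatch frequency: " + Stopwatch.Frequency + " ticks/sec");
    }
}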

1 Answer


Based on the observations from you helpful lot, we used the Stopwatch class... Then we hit an even weirder issue... We used the Stopwatch.ElapsedTicks property, thinking that dividing it by 10 would give us microseconds (the Duration column in the server-side trace is in microseconds because it's saved to a file). Still, the time from the application was coming out less than the time from the database... As it turned out, the property to use was Stopwatch.Elapsed.Ticks, not ElapsedTicks: ElapsedTicks is in raw high-resolution timer ticks (units of Stopwatch.Frequency), whereas Elapsed is a TimeSpan whose Ticks are 100-nanosecond units. Dividing Elapsed.Ticks by 10 gave us the microseconds...
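
In code, the difference between the two properties looks like this (a minimal sketch; `sqlComm` is assumed to be the same command object as in the question, and `Stopwatch` comes from `System.Diagnostics`):

// Time the call with a dedicated high-resolution timer instead of DateTime.Now.
var sw = Stopwatch.StartNew();
sqlComm.ExecuteNonQuery();
sw.Stop();

// Stopwatch.ElapsedTicks is in raw timer ticks (units of Stopwatch.Frequency),
// so dividing it by 10 does NOT give microseconds on most hardware.
long microsFromRaw = sw.ElapsedTicks * 1000000L / Stopwatch.Frequency;

// Stopwatch.Elapsed is a TimeSpan; TimeSpan ticks are 100-nanosecond units,
// so dividing by 10 does give microseconds, directly comparable to the trace's
// Duration column, which is in microseconds when the trace is written to a file.
long microsFromTimeSpan = sw.Elapsed.Ticks / 10;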

So there it is... got both the application and the db to give us very close to accurate (can't be sure ever :) ) times...

The conclusion I have drawn is not to trust the DateTime.Now .NET property, nor the StartTime and EndTime server trace columns, for measuring short durations... Calculating the duration from dedicated timers is what's required...

Thanks for the heads up guys...
