A SQL Server datetime value is internally a tuple holding 2 signed 32-bit integers:

- The high-order integer is the offset, in whole days, from the epoch (zero point) of the SQL Server calendar, which happens to be 1 Jan 1900 00:00:00.000.
- The low-order integer is the offset from start-of-day, not in milliseconds, but in "ticks" of approximately 1/300 of a second.
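To see what those two integers mean in practice, here is a minimal sketch that rebuilds a datetime from a days-and-ticks pair; the variable names are mine, and the 41822/300 values are simply the ones that show up in the example query further down:

declare @days  int = 41822 ;  -- whole days since 1 Jan 1900
declare @ticks int = 300 ;    -- 1/300-second ticks since midnight (300 ticks = 1 second)

-- 1 tick is 1/300 of a second, so milliseconds = ticks * 10 / 3 (integer arithmetic, stays within int range)
select [rebuilt] = dateadd(ms, @ticks * 10 / 3, dateadd(day, @days, '1 Jan 1900')) ;
-- rebuilt = 2014-07-04 00:00:01.000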
getdate() returns the current date and time of day as a datetime value. The expression cast( getdate() as int ) gives almost the same result as datediff(day,'1 Jan 1900 00:00:00.000',getdate()); the difference is that the cast rounds to the nearest whole day (times from noon onward round up), while datediff simply counts the day boundaries crossed.
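A quick way to see that rounding difference, using an afternoon value picked just for illustration:

declare @afternoon datetime = '4 July 2014 13:00'
select [cast]     = cast(@afternoon as int) ,              -- 41823: rounded up to the nearest day
       [datediff] = datediff(day,'1 Jan 1900',@afternoon)  -- 41822: day boundaries crossed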
This query
declare @epoch datetime = '4 July 2014 00:00:01.000'
select [raw]            = @epoch ,
       [cast]           = cast(@epoch as int) ,
       [datediff]       = datediff(day,'1 Jan 1900',@epoch) ,
       [highorder-word] = convert(int,substring( convert(varbinary(8),@epoch),1,4) ) ,
       [low-order-word] = convert(int,substring( convert(varbinary(8),@epoch),5,4) )

produces the following results:
raw                     cast  datediff highorder-word low-order-word
----------------------- ----- -------- -------------- --------------
2014-07-04 00:00:01.000 41822    41822          41822            300
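As a cross-check (a sketch using the literal values from that result set), you can glue the two words back together and convert the 8 bytes straight back to a datetime:

select [rebuilt] = convert(datetime, cast(41822 as binary(4)) + cast(300 as binary(4)))
-- rebuilt = 2014-07-04 00:00:01.000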
You'll notice that 1 second is exactly 300 ticks. There are historical reasons why SQL Server counts time-of-day in such an odd way. As I understand it, it goes back to the days of Sybase SQL Server and the poor clock resolution on early Windows and OS/2 boxen.
Given all that, you can get the same count (days since 1900) like this in C#:
public string SqlDate
{
    // equivalent of datediff(day,'1 Jan 1900',getdate()) for the current moment
    get { return DaysSince1900( DateTime.Now ).ToString() ; }
}

private int DaysSince1900( DateTime now )
{
    // strip the time-of-day, then count whole days elapsed since the SQL Server epoch
    TimeSpan period = now.Date - SqlServerEpoch ;
    return period.Days ;
}

// day zero of the SQL Server datetime calendar
static readonly DateTime SqlServerEpoch = new DateTime( 1900 , 1 , 1 ) ;