4

I've read several articles about the date/time data types in SQL Server, but I'm still undecided about which type to use.

Everyone says to use the new and powerful datetime2 type, but smalldatetime is still more compact (4 bytes vs. 6 bytes for datetime2(0)).

This is my scope:

A table defined in this way:

Date: typeof(<Date type to choose>)
Value1: typeof(int)
Value2: typeof(int)
Value3: typeof(int)
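
For concreteness, in T-SQL this would be something like the following (just a sketch; the table name is a placeholder, and datetime2(0) stands in for whichever type I end up choosing):

create table dbo.DailyValues (          -- hypothetical name
    [Date]  datetime2(0) not null,      -- or smalldatetime; this is the choice in question
    Value1  int not null,
    Value2  int not null,
    Value3  int not null
);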

Then I have these requirements:

For the Date column, I need precision down to the minute.

In this table I will store up to 1 million records every day, so it will grow very large.

My question is: should I use the old smalldatetime type, which saves me 2 bytes per row (about 2 MB every day at 1 million rows)?

OR

Should I use the new and powerful datetime2 datatype?

Thanks

  • Just pick the data type most appropriate for the range and precision you need and you'll get storage benefits as a side effect. If you don't need seconds, use smalldatetime. See https://dba.stackexchange.com/questions/160709/sql-datetime20-vs-datetime22/160713#160713 – Dan Guzman Jun 10 '17 at 12:59
  • I saw that article, but it's still not clear to me whether it's better to save 2 bytes with an old structure or pay 2 bytes for a modern one – Oscar Peli Jun 10 '17 at 15:00

2 Answers

7

datetime2(2) is 6 bytes and gets you not just seconds but two decimal places of fractional seconds (00:00:00.00 through 23:59:59.99).

smalldatetime is 4 bytes and has the potential to save you 2 bytes per row. If this column is part of the clustering key, those 2 bytes saved will also have an impact on every nonclustered index.

One important behavior of smalldatetime is that it rounds to the nearest minute instead of truncating the seconds. Depending on how the data is inserted into the table, and on whether this behavior is desired, you may need to preprocess the value before inserting it (see the truncation sketch after the example below).

Example:

create table t (sdt smalldatetime, dt2 datetime2(2));

-- both columns receive the same value;
-- smalldatetime rounds to the nearest minute: :22 rounds down, :33 rounds up
insert into t values
 ('2017-01-01T11:22:22.33','2017-01-01T11:22:22.33')
,('2017-01-01T11:22:33.33','2017-01-01T11:22:33.33');

select
    sdt = convert(char(23), sdt, 121)
  , dt2 = convert(char(23), dt2, 121)
from t;

rextester demo: http://rextester.com/JPMEE57778

returns:

+-------------------------+-------------------------+
|           sdt           |           dt2           |
+-------------------------+-------------------------+
| 2017-01-01 11:22:00.000 | 2017-01-01 11:22:22.33  |
| 2017-01-01 11:23:00.000 | 2017-01-01 11:22:33.33  |
+-------------------------+-------------------------+

I really dislike this rounding, so that alone would probably be enough for me to go with datetime2(2) despite the ~2 MB per day savings, unless the column was part of the clustering key.
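
If you still wanted smalldatetime's 4 bytes but truncation rather than rounding, one common workaround (my sketch, not part of the original answer) is to floor the value to the minute before inserting:

declare @dt datetime2(2) = '2017-01-01T11:22:33.33';

-- whole minutes elapsed since the 1900-01-01 epoch, added back to the epoch:
-- this drops the seconds instead of rounding them
select cast(dateadd(minute, datediff(minute, 0, @dt), 0) as smalldatetime);
-- returns 2017-01-01 11:22:00 rather than rounding up to 11:23:00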

  • I don't need this kind of rounding because my data is normalized client-side, so seconds are always "00". Data is stored every 15 minutes – Oscar Peli Jun 10 '17 at 15:04
  • Rounding to the minute can be a PITA, especially because it can round *up* to "a time that has not yet occurred" - and can break things that, for questionable reasons I shall not defend, assume stronger ordering... ask me how I know D: Calling out this difference is worthy of several votes. – user2864740 Oct 28 '18 at 19:31
3

Date ranges: smalldatetime covers 1900-01-01 through 2079-06-06, while datetime2 covers 0001-01-01 through 9999-12-31.

Storage: if you don't need seconds, just hours and minutes, then smalldatetime wins at 4 bytes, as opposed to datetime2(0) at 6 bytes.
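
A quick way to verify both points yourself (my sketch; assumes any modern SQL Server instance):

select datalength(cast(sysdatetime() as smalldatetime)) as sdt_bytes;  -- 4
select datalength(cast(sysdatetime() as datetime2(0)))  as dt2_bytes;  -- 6

select cast('2079-06-06T23:59:00' as smalldatetime);  -- at the upper bound: works
select cast('2079-06-07'          as datetime2(0));   -- fine for datetime2
-- cast('2079-06-07' as smalldatetime) would fail with an out-of-range error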

datetime2 is an upgrade in range of values and precision (no rounding!), so you should go with datetime2 for big data.