I've written a rather simple query that selects the records that are less than 5 minutes old, based on a DateTimeOffset column called LastActivity.
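Roughly, the query looks like this (a minimal sketch; MyDbContext and the Sessions DbSet are placeholder names, not my actual model):

    using Microsoft.EntityFrameworkCore;

    // Select records whose LastActivity is within the last 5 minutes.
    using var db = new MyDbContext();
    var recent = await db.Sessions
        .Where(s => s.LastActivity > DateTimeOffset.UtcNow.AddMinutes(-5))
        .ToListAsync();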
I'm using EF and when inspecting the actual query I see that EF translates this condition:

    LastActivity > DateTimeOffset.UtcNow.AddMinutes(-5);

to a predicate using the SQL date/time functions:

    [s].[LastActivity] > DATEADD(minute, CAST(-5.0E0 AS int), CAST(SYSUTCDATETIME() AS datetimeoffset))
As you can see, it does some unnecessary casting (e.g. .AddMinutes expects a double), so I was wondering whether it would be more performant to calculate the cutoff in code first and then pass the result to the query as a parameter. I know the answer would then depend on the statistics, and I can't say yet how these values will be distributed. I've run both queries on a sample database and there is no real difference in performance, but I assume this might change as the dataset grows.
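The parameterized variant I have in mind would look something like this (same placeholder names as the sketch above):

    // Compute the cutoff once in code; EF then sends it as a query parameter.
    var cutoff = DateTimeOffset.UtcNow.AddMinutes(-5);
    var recent = await db.Sessions
        .Where(s => s.LastActivity > cutoff)
        .ToListAsync();

which EF translates to a plain parameterized comparison, along the lines of [s].[LastActivity] > @__cutoff_0.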
My question is: am I correct in assuming that when there are no parameters (but DATEADD is used), SQL Server will always reuse the same query plan, or will it somehow optimise around the fact that we are using SYSUTCDATETIME?