2.8 million rows is a lot of data! It's going to take a "long" time no matter what ("long" being relative; 10 seconds is reasonable for 2.8 million rows). If you want to improve it, return less data.
There are 2 options that might yield marginal improvement (both sketched below):

- Create a `CLUSTERED` index on the `Time` column. Note a table is only allowed 1 clustered index, so this might not be a good idea when other factors are considered.
- Create an index on `Time` that `INCLUDE`s `TransactionNumber`, so the index alone covers the query.
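In T-SQL the two options would look something like the following. The table name `dbo.Transactions` and the index names are assumptions; adjust them to match your schema:

```sql
-- Option 1: clustered index on Time. Remember: only 1 clustered index
-- per table, so this fails if one already exists (e.g. backing the
-- primary key), and creating it rebuilds the whole table.
CREATE CLUSTERED INDEX CIX_Transactions_Time
    ON dbo.Transactions ([Time]);

-- Option 2: nonclustered index on Time that INCLUDEs TransactionNumber,
-- so the query can be answered from the index alone (a covering index).
CREATE NONCLUSTERED INDEX IX_Transactions_Time_Incl
    ON dbo.Transactions ([Time])
    INCLUDE (TransactionNumber);
```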
But either way, your problem is not the query/index; it's the amount of data.
Additional Info
- Using 2 separate indexes as you have is not going to be of any help. Your query is only able to use 1 index (whichever it considers best).
- I have assumed that your `TransactionNumber` really is a numeric data type and not one of the horrific "string numbers" I see far too often. If you are using some form of `varchar` to store your `TransactionNumber`, you would do well to change that to `int` or `bigint` (a conversion sketch appears after this list). These types require fewer bytes to store ==> meaning less data ==> meaning better performance.
- Your query currently has a bug unless you truncate the time portion of your `Time` column. Clearly you intend to return all transactions for the year, but currently you'll miss any transaction on 31 December that did not happen at exactly `00:00`. To fix this, time range queries should be filtered as follows (a full corrected query is sketched after this list):
  `Time >= StartOfRange and Time < StartOfNextRange`
- E.g. `Time >= '2015-01-01' and Time < '2016-01-01'`
- NB: Note the strict `<` in the second condition. This includes all transactions before the very beginning of the next year.
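Putting the half-open range into your query, it would look something like this (the table name `dbo.Transactions` is an assumption; the dates match the example above):

```sql
-- Half-open range: >= start of 2015, < start of 2016.
-- This catches every 31 December transaction, whatever its time of day.
SELECT TransactionNumber,
       [Time]
FROM   dbo.Transactions
WHERE  [Time] >= '2015-01-01'
  AND  [Time] <  '2016-01-01';
```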
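And if you do need to convert a "string number" column, a minimal sketch, again assuming the hypothetical `dbo.Transactions` (and SQL Server 2012+ for `TRY_CONVERT`):

```sql
-- Sketch only: convert a "string number" column to a proper integer type.
-- First confirm every stored value actually converts:
SELECT *
FROM   dbo.Transactions
WHERE  TRY_CONVERT(bigint, TransactionNumber) IS NULL
  AND  TransactionNumber IS NOT NULL;

-- Then change the type. Any index or constraint referencing the
-- column must be dropped first and recreated afterwards.
ALTER TABLE dbo.Transactions
    ALTER COLUMN TransactionNumber bigint;
```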
Math
Some calculations on why I say 2.8 million rows is "a lot of data to return".
- You return `TransactionNumber` and `Time`. I'll assume 4 and 8 bytes per row respectively. (It could be more but this is a very conservative estimate.)
- That's `12 bytes * 2,800,000 = 33,600,000 bytes` (approximately 32 MB).
- This is just the minimum data returned. Bear in mind that the data processed is much more, including other columns of your table and the index trees.
- To transfer those 33,600,000 bytes (32 MB) in 10 seconds between server and client would require `32 MB * 8 / 10 = 25.6 Mb/s`, i.e. an average transfer rate of more than 25 megabits per second.
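For the skeptical, a quick T-SQL sanity check of that arithmetic (the values are the conservative estimates above, not measurements from your system):

```sql
DECLARE @Rows        bigint = 2800000,
        @BytesPerRow int    = 12,   -- 4 (int) + 8 (datetime)
        @Seconds     int    = 10;

SELECT @Rows * @BytesPerRow                              AS TotalBytes,  -- 33,600,000
       @Rows * @BytesPerRow / 1048576.0                  AS TotalMB,     -- ~32
       @Rows * @BytesPerRow * 8 / (1048576.0 * @Seconds) AS MbitPerSec;  -- ~25.6
```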
Conclusion
Whatever it is you're trying to do that requires 2.8 million rows possibly needs a rethink of your approach. However, if you're talking about a once-a-year job, is 10 seconds really worth worrying about?