Consider a table 'Orders' with 4 columns

OrderNumber : DealerNumber : Code : Time

There's a non-clustered index on OrderNumber, DealerNumber, and Code. The Time column is not indexed.

So

select * from Orders where OrderNumber = 10 and DealerNumber = 20

is pretty fast. But as the table grows, more duplicates are returned: an order from 2010 can show up alongside one from 2014. So I changed the query to

select *
from Orders
where OrderNumber = 10 and DealerNumber = 20 and [Time] > getdate() - 180

But this changes the execution time of the query from under 1 second to over 30 minutes, so that's not an option.

I heard something about filtered indexes. Would it be wise to put a filtered index on [Time] column? If so will generating the index take a long time?

Or are there better options to let the second query run very fast?


1 Answer


The index that you should be thinking about for this query is Orders(OrderNumber, DealerNumber, [Time]).
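A minimal sketch of that index, assuming SQL Server and the table/column names from the question (the index name is a placeholder):

```sql
-- Equality columns (OrderNumber, DealerNumber) first, then the range
-- column ([Time]) last, so a single index seek covers all three predicates.
CREATE NONCLUSTERED INDEX IX_Orders_OrderNumber_DealerNumber_Time
    ON dbo.Orders (OrderNumber, DealerNumber, [Time]);
```

With this column order the engine can seek directly to the matching OrderNumber/DealerNumber rows and then range-scan on [Time], instead of filtering Time after the fact.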

However, I am not sure why the first version would be so fast and the second so slow. Here are three possibilities:

  • You are measuring the performance by the time to the first record being returned, and the first query really does take a long time.
  • There is an index on [Time] and the optimizer is getting confused and using that index instead.
  • Some other process has locked the table.

I don't think a filtered index would be particularly helpful for this query.

By the way, getdate() returns a datetime with a time-of-day component. So you might want: `[Time] > cast(getdate() - 180 as date)`.
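Putting that together, the question's second query would look something like this (a sketch, not tested against the asker's schema):

```sql
-- cast(... as date) strips the time-of-day, so the cutoff is midnight
-- 180 days ago rather than the current clock time 180 days ago.
SELECT *
FROM Orders
WHERE OrderNumber = 10
  AND DealerNumber = 20
  AND [Time] > CAST(GETDATE() - 180 AS DATE);
```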

  • you were right, it had nothing to do with the [Time] column. When I tested it a while ago, something else accidentally caused the slowness and I assumed it was this. Doesn't matter that the question is void, I learned a few SQL things anyway – Serve Laurijssen Jan 06 '15 at 07:39