
I need to build a history report which provides READ-only access to a table that is 150 GB in size. I would like to know whether I should create column indexes for all of the below:

  1. My report has 10 filters that take user input (e.g. datetime, varchar columns). Should I create 10 indexes, one for each of them?
  2. My report has 10 default hard-coded filters, e.g. where code = 'RAM'. Should I create another 10 indexes for those as well?

My main concern is performance.
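For illustration, a typical report query would look roughly like this (the table and column names are placeholders, not my real schema):

    -- Placeholder names: the real report combines ~10 user-supplied filters
    -- with ~10 hard-coded ones such as code = 'RAM'
    SELECT *
    FROM dbo.HistoryTable
    WHERE code = 'RAM'                        -- hard-coded filter
      AND event_time BETWEEN @From AND @To    -- user-supplied datetime filter
      AND customer_name = @CustomerName;      -- user-supplied varchar filter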


1 Answer


First, you say your primary concern is performance. Read performance or write performance?

In theory you could index a table to death, which would make reads great, but every one of those indexes has to be maintained and updated each time you insert or update a row.

Second, which indexes you create should be based on which queries are frequently run against your table.

For example, imagine you have a table with columns ID, Type, and TextValue. In your application you have a GetByID() method and a ListWithType() method. Your indexes should be based on what the SQL is filtering or joining on: in this case ID (your primary key, obviously) and a non-unique index on Type.
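A minimal sketch of what that might look like in T-SQL (the table definition and index names here are assumptions for the example, not your actual schema):

    -- Hypothetical table from the example above
    CREATE TABLE dbo.Items
    (
        ID        INT           NOT NULL,
        Type      VARCHAR(50)   NOT NULL,
        TextValue NVARCHAR(MAX) NULL,
        CONSTRAINT PK_Items PRIMARY KEY CLUSTERED (ID)   -- covers GetByID()
    );

    -- Non-unique index supporting ListWithType(), which filters on Type
    CREATE NONCLUSTERED INDEX IX_Items_Type
        ON dbo.Items (Type);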

  • Decide which kind of performance matters most (read or write)
  • Watch your application and observe the common and slow queries
  • Create indexes based on the various filters/joins those queries are doing

This is obviously a simplistic overview. I suggest you read up on how to read a SQL execution plan and adjust your indexes to improve it.
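Applied to your report, that usually means one index per common query shape rather than one index per filter column. As a rough sketch, assuming hypothetical column names (Code, EventDate) on your history table:

    -- Hypothetical index: supports the common pattern
    --   WHERE Code = 'RAM' AND EventDate BETWEEN @From AND @To
    -- Equality column first, range column second, so the seek lands on a
    -- contiguous slice of the index.
    CREATE NONCLUSTERED INDEX IX_History_Code_EventDate
        ON dbo.History (Code, EventDate)
        INCLUDE (SomeReportColumn);   -- optionally cover columns the report selects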

Finally, when you create indexes, make sure you select an appropriate fill factor (this is based on how the table is used) or you'll be constantly rebuilding them.
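In SQL Server, for example, the fill factor can be specified when an index is created or rebuilt; the 90 below is purely illustrative (for a read-only history table a fill factor of 100 is usually fine):

    -- Hypothetical index name; FILLFACTOR = 90 leaves ~10% free space per
    -- leaf page to reduce page splits if the table still takes writes
    ALTER INDEX IX_History_Code_EventDate ON dbo.History
        REBUILD WITH (FILLFACTOR = 90);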

  • @belinq I assumed so, that's the most common problem and tailored most of my answer towards it. – Liath Sep 04 '14 at 08:50