Assuming a table contains enough data to make an index seek worthwhile, at what cardinality will SQL Server (or PostgreSQL) opt for an index scan instead?
I ask because I previously posted a question (link) in which two queries ran at the same speed, yet one of them made no attempt to use the index on the filtered columns. After SQL Server suggested adding a covering index that included the queried columns (it made this suggestion for both queries), I started looking for reasons why it would make such a strange suggestion.
I experimented with making the indexes both covering and composite, but both queries still executed in the same time (we're talking about 3 million rows).
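For reference, the indexes I tried looked roughly like this; the table and column names are placeholders rather than the real schema:

```sql
-- Placeholder names (dbo.BigTable, Col1..Col4); the real schema differs.
-- Composite index on the two filtered columns:
CREATE NONCLUSTERED INDEX IX_BigTable_Col1_Col2
    ON dbo.BigTable (Col1, Col2);

-- Covering variant, adding the selected columns as INCLUDEd columns,
-- along the lines of what SQL Server's missing-index suggestion proposed:
CREATE NONCLUSTERED INDEX IX_BigTable_Col1_Col2_Covering
    ON dbo.BigTable (Col1, Col2)
    INCLUDE (Col3, Col4);
```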
Finally, I concluded it was because of the ultra-high cardinality of the data: every row is unique, and I deduced that this is what caused SQL Server to choose an index scan. However, the query filtered with "WHERE Col1 > ? AND Col2 < ?", so this is a little confusing.
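Sketched out, the query had roughly this shape (again with placeholder names; the two ? parameters correspond to the local variables here):

```sql
-- Placeholder names; the real query reads from a ~3 million row table.
DECLARE @LowerBound INT = 100;   -- stands in for the first ? parameter
DECLARE @UpperBound INT = 5000;  -- stands in for the second ? parameter

SELECT Col1, Col2, Col3
FROM dbo.BigTable
WHERE Col1 > @LowerBound
  AND Col2 < @UpperBound;
```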
My questions are:
- At what cardinality will an RDBMS always opt for an index scan?
- Can anyone explain why SQL Server would not use the index when the WHERE clause suggests that doing so would make sense?
I have attached the execution plan.