An index with all the keys near 900 bytes would be very large and very deep (very few keys per page result in very tall B-Trees).
It depends on how you plan to query the values. An index is useful in several cases:
- when a value is probed. This is the most typical use: an exact value is searched in the table. Typical examples are WHERE column='ABC' or a join condition ON a.column = B.someothercolumn.
- when a range is scanned. This is also fairly typical: a range of values is searched in the table. Besides the obvious example of WHERE column BETWEEN 'ABC' AND 'DEF', there are other, less obvious examples, like a partial match: WHERE column LIKE 'ABC%'.
- when there is an ordering requirement. This use is less known, but an index can help a query with an explicit ORDER BY column requirement avoid a stop-and-go sort, and it can also satisfy certain hidden sort requirements, like ROW_NUMBER() OVER (ORDER BY column). (A short sketch after this list illustrates all three cases.)
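As a quick illustration (the table and column names here are hypothetical, not taken from the question), a single index on a narrow column can serve all three patterns:

create table orders (
    id int not null identity(1,1) primary key,
    customer_code varchar(50) not null,
    order_date datetime not null);

create index ix_orders_customer_code on orders (customer_code);
go

-- 1) probe: an exact match seeks into the index
select id, order_date from orders where customer_code = 'ABC';
-- 2) range scan: a partial match reads a contiguous range of the index
select id, order_date from orders where customer_code like 'ABC%';
-- 3) ordering: the index order can satisfy the sort without a stop-and-go sort
select id, order_date from orders order by customer_code;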
So, what do you need the index for? What kind of queries would use it?
For range scans and for ordering requirements there is no other solution but to have the index, and you will have to weigh the cost of the index vs. the benefits.
For probes you can, potentially, use a hash to avoid indexing a very large column. Create a persisted computed column as column_checksum = CHECKSUM(column) and then index that column. Queries have to be rewritten to use WHERE column_checksum = CHECKSUM('ABC') AND column='ABC'. Careful consideration has to be given to weighing the advantage of a narrow index (a 32-bit checksum) against the disadvantages of the collision double-check and the loss of range scan and ordering capabilities.
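A minimal sketch of that approach, assuming a hypothetical table wide_table with a large varchar column named value (the names are illustrative only, not from the question):

-- Add a persisted computed column holding the 32-bit checksum of the wide column,
-- then index the narrow checksum instead of the wide value itself.
alter table wide_table
    add value_checksum as checksum(value) persisted;

create index ix_wide_table_value_checksum
    on wide_table (value_checksum);
go

-- Probes must filter on both the checksum (so the index is used)
-- and on the original column (to weed out checksum collisions).
select *
    from wide_table
    where value_checksum = checksum('ABC')
    and value = 'ABC';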
After the comment:
I once had a similar problem and I used a hash column. The value was too large to index (>1K) and I also needed to convert the value into an ID to store (basically, a dictionary). Something along these lines:
create table values_dictionary (
    id int not null identity(1,1),
    value varchar(8000) not null,
    value_hash as checksum(value) persisted,
    constraint pk_values_dictionary_id
        primary key nonclustered (id));

create unique clustered index cdx_values_dictionary_checksum
    on values_dictionary (value_hash, id);
go
create procedure usp_get_or_create_value_id (
    @value varchar(8000),
    @id int output)
as
begin
    -- probe the clustered index on (value_hash, id), then confirm the value
    -- itself to rule out checksum collisions
    declare @hash int = CHECKSUM(@value);
    set @id = NULL;
    select @id = id
        from values_dictionary
        where value_hash = @hash
        and value = @value;
    if @id is null
    begin
        insert into values_dictionary (value)
            values (@value);
        set @id = scope_identity();
    end
end
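Calling the procedure would look something like this (assuming the table and procedure above are created as shown):

declare @id int;
exec usp_get_or_create_value_id
    @value = 'some very long value ...',
    @id = @id output;
select @id as value_id;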
In this case the dictionary table is organized as a clustered index on the value_hash column, which groups all the colliding hash values together. The id column is added to make the clustered index unique, avoiding the need for a hidden uniqueifier column. This structure makes the lookup for @value as efficient as possible, without a hugely inefficient index on value, and it bypasses the 900-byte limitation. The primary key on id is non-clustered, which means that looking up the value from an id incurs the overhead of one extra probe in the clustered index.
Not sure if this answers your problem; you obviously know more about your actual scenarios than I do. Also, the code does not handle error conditions, and it can actually insert duplicate @value entries, which may or may not be correct.