I have a table [groupmembers] with three columns:
[dn] NVARCHAR(255)
[objectGUID] NVARCHAR(38)
[member] NVARCHAR(255)
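For reference, a minimal sketch of the table as described above (column names and types are from the question; the `dbo` schema and `NOT NULL` constraints are assumptions):

```sql
-- Sketch of the staging table described above.
-- NVARCHAR(38) holds the GUID in its string form;
-- UNIQUEIDENTIFIER would also work if the loader converts it.
CREATE TABLE dbo.groupmembers (
    [dn]         NVARCHAR(255) NOT NULL,
    [objectGUID] NVARCHAR(38)  NOT NULL,
    [member]     NVARCHAR(255) NOT NULL
);
```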
The source of the data is a table where values are stored like this:
<dn>,<objectGUID>,<member|member|member|member|member>
I have an SSIS package that splits out each of the pipe-delimited [member] values and writes a new row per member, keeping the same [dn] and [objectGUID].
So, all of the rows containing the members of each [dn]:[objectGUID] pair will always be written sequentially. The table will be truncated every time it is loaded.
The [dn] and [objectGUID] are different unique values representing the same source object. I have them both because two different applications will each use one to query for [member]. Furthermore, the queries against [dn] OR [objectGUID] will (nearly) always return every associated [member].
Because the [member] rows will (nearly) always be returned together, would it make sense to create a clustered index on one of [dn] or [objectGUID]?
My assumption is that if the rows are written together, clustered on their like values, and read together, performance would be better than with a non-clustered index.
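One way this could look (a sketch only; the index names are made up, and I'm assuming [dn] is picked as the clustering key — since the split-out rows repeat [dn], the index must be non-unique, so SQL Server will add a hidden uniquifier):

```sql
-- Cluster on [dn] so all members of a group are stored contiguously.
CREATE CLUSTERED INDEX CIX_groupmembers_dn
    ON dbo.groupmembers ([dn]);

-- The other application queries by [objectGUID]; including [member]
-- lets those reads be served from the non-clustered index alone.
CREATE NONCLUSTERED INDEX IX_groupmembers_objectGUID
    ON dbo.groupmembers ([objectGUID])
    INCLUDE ([member]);
```

Since the table is truncated and reloaded each time, the indexes could also be dropped before the load and rebuilt afterward to keep the SSIS insert fast.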