
I have a relatively wide table (50 columns). The first 5 columns define the scope of a given subset of data, and the remaining 45 represent properties (mapped at deployment time, so they are static). Users can, through a query builder, query the table on any combination of its property columns.

I am now wondering what would be the best strategy to build indexes for this table. Performance-wise, would it be better to create a large number of "small" indexes or a small number of indexes spanning many columns?

Andrea

1 Answer


If the table is not heavily modified (updates, inserts, deletes), the best strategy is to create a bitmap index on every property column. Bitmap indexes are small, and they are the best choice for combining WHERE conditions.
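For illustration, a minimal sketch of this layout (the table name wide_table and the column names prop1, prop2 are hypothetical):

```sql
-- One single-column bitmap index per property column; the optimizer
-- can combine these with BITMAP AND / BITMAP OR operations to cover
-- arbitrary combinations of predicates from the query builder.
CREATE BITMAP INDEX ix_wide_prop1 ON wide_table (prop1);
CREATE BITMAP INDEX ix_wide_prop2 ON wide_table (prop2);
-- ... and so on for the remaining property columns.
```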

If the table is modified considerably, you should probably rely on normal (B-tree) indexes on the most frequently queried columns (after analyzing all the queries).
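Again with hypothetical names, a sketch of that alternative:

```sql
-- Single-column B-tree indexes on the hottest predicate columns.
CREATE INDEX ix_wide_prop7 ON wide_table (prop7);

-- A composite B-tree index helps when columns are usually filtered
-- together; note that the leading column must appear in the predicate
-- for an ordinary index range scan to be used.
CREATE INDEX ix_wide_scope1_prop7 ON wide_table (scope1, prop7);
```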

Florin Ghita
  • The table is going to receive inserts at a reasonable frequency (even looking ahead, we are not going to exceed 1 insert per second, and that is a very high-end case). No updates and almost no deletes (only housekeeping operations). Given this additional information, would you suggest bitmap or B-tree? – Andrea Sep 24 '13 at 08:52
  • Do the inserts have a date column? (Do you know which are the latest records?) – Florin Ghita Sep 24 '13 at 08:55
  • Yes, there is a timestamp column and inserts are synchronized with database time – Andrea Sep 24 '13 at 08:57
  • In this case, if you have the Partitioning option, you can partition the table daily on the timestamp, use local bitmap indexes for all partitions except the last, and mark the index partition for the last (current) partition as unusable. Your queries will then full-scan the last partition and use the indexes for the other, massive part of the table; a sketch of this setup follows below. – Florin Ghita Sep 24 '13 at 09:29
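A minimal DDL sketch of the approach described in that last comment, assuming Oracle 11g or later for interval partitioning; the table, column, index, and partition names are all hypothetical:

```sql
-- Daily interval partitioning keyed on the insert timestamp.
CREATE TABLE wide_table (
  scope1     NUMBER,
  created_at TIMESTAMP NOT NULL,
  prop1      VARCHAR2(50)
  -- ... remaining scope and property columns
)
PARTITION BY RANGE (created_at)
INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
(PARTITION p_initial VALUES LESS THAN (TIMESTAMP '2013-09-24 00:00:00'));

-- Bitmap indexes on a partitioned table must be LOCAL, i.e. one
-- index partition per table partition.
CREATE BITMAP INDEX ix_wide_prop1 ON wide_table (prop1) LOCAL;

-- Mark the index partition covering today's data as unusable so the
-- frequent inserts skip bitmap index maintenance. Interval partitions
-- get system-generated names; look them up in USER_IND_PARTITIONS
-- before running this.
ALTER INDEX ix_wide_prop1 MODIFY PARTITION p_initial UNUSABLE;

-- Once the day rolls over, rebuild the now-static index partition.
ALTER INDEX ix_wide_prop1 REBUILD PARTITION p_initial;
```

With the default skip_unusable_indexes = true setting, queries that touch the current partition simply full-scan it, while still using the bitmap index partitions on all the older, static partitions.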