
I have a database table (called Master) that has about 40 columns. 11 of them always contain the same constant values across all of its roughly 100,000 rows.

The downside of this structure is that when I need to change those 11 column values, I have to update all 100,000 rows.

I could move all the constant data into a different table and update it only once, in one place, instead of in 100,000 places.

However, if I do that, then whenever I display the fields I need an INNER JOIN between the two tables, which I know makes the SELECT statement slower.

I should add that updating the columns happens much more rarely than reading (displaying) the data.

How do you suggest I store the data in the database to get the best performance?

Thanks

Iwalu
  • I know this might not be useful but I think it would be prudent to run some form of performance analysis on both mechanisms. The performance impact might be negligible next to the gains in maintenance. See http://stackoverflow.com/questions/2726657/inner-join-vs-left-join-performance-in-sql-server for a discussion regarding joins. – Kenneth Clark Apr 22 '13 at 15:42

1 Answer


I would put the 11 columns in a separate table.

You are bloating the main table with 11 extra static values: this hurts row density (rows per 8 KB page) and increases the memory footprint of every query.

That is, you simply read and process a lot less data for the main table if you split out the static data.
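To make the split concrete, here is a minimal runnable sketch using SQLite from Python. The table and column names (`MasterGroup`, `group_id`, `static_a`, `payload`) are assumptions for illustration, not names from the question; the static columns move into a small lookup table, `Master` keeps only a foreign key, and the update then touches one row instead of 100,000.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    -- Hypothetical lookup table holding the formerly-duplicated columns
    CREATE TABLE MasterGroup (
        group_id INTEGER PRIMARY KEY,
        static_a TEXT,   -- stands in for the 11 constant columns
        static_b TEXT
    );
    CREATE TABLE Master (
        id       INTEGER PRIMARY KEY,
        group_id INTEGER REFERENCES MasterGroup(group_id),
        payload  TEXT    -- stands in for the remaining per-row columns
    );
""")
cur.execute("INSERT INTO MasterGroup VALUES (1, 'const1', 'const2')")
cur.executemany(
    "INSERT INTO Master (group_id, payload) VALUES (1, ?)",
    [(f"row {i}",) for i in range(100)],  # 100 rows standing in for 100,000
)

# Changing the "constant" values now updates a single row:
cur.execute("UPDATE MasterGroup SET static_a = 'changed' WHERE group_id = 1")

# Reads join the lookup table back in:
rows = cur.execute("""
    SELECT m.payload, g.static_a, g.static_b
    FROM Master AS m
    INNER JOIN MasterGroup AS g ON g.group_id = m.group_id
    ORDER BY m.id
""").fetchall()
```

Since the join key is the lookup table's primary key and the lookup table is tiny, the per-row join cost on reads is typically small compared with the saved I/O, but as the comment above notes, measuring both layouts on your own data is the only way to be sure.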

gbn