I have a very large read-only DB with ~30 tables. The total size of the DB is around 13 TB, with the largest table around 4.5 TB. (There are 10 or so tables 1 TB+ in size, and then several smaller tables.) Currently the DB is split up into 8 data files, all in the PRIMARY filegroup.
I've applied PAGE-level data compression to some of the large tables, which has reduced the size of the DB to 10 TB or so; however, I'd really like to reclaim some of that space on disk. (This data set is READ-ONLY; it will never grow.)
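For context, the compression pass I ran per table looked roughly like this (`dbo.BigFactTable` is a placeholder for one of my large tables):

```sql
-- Preview the expected savings first, then rebuild everything on the
-- table (clustered + nonclustered indexes) with PAGE compression.
EXEC sp_estimate_data_compression_savings 'dbo', 'BigFactTable', NULL, NULL, 'PAGE';

ALTER INDEX ALL ON dbo.BigFactTable
REBUILD WITH (DATA_COMPRESSION = PAGE, SORT_IN_TEMPDB = ON);
```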
I realize that shrinking the files will cause tons of fragmentation, which I could fix by rebuilding all the indexes, but rebuilding the indexes will probably cause the files to grow all over again... argh!
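To make that cycle concrete, here's roughly what I'd be doing (the logical file name and target size are made up):

```sql
-- Shrink one data file down to a target size in MB...
DBCC SHRINKFILE (N'MyDB_Data3', 500000);

-- ...then see how badly the shrink scrambled the indexes.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name AS index_name,
       s.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.avg_fragmentation_in_percent > 30
ORDER BY s.avg_fragmentation_in_percent DESC;
```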
Which leads to my question(s) about how to reclaim disk space after compression:
- Is my only solution to copy all the tables / data into new filegroups with smaller files, drop the original tables, and then empty and drop (or shrink) the original files?
- Is anyone aware of a script or tool that would help me decide the optimal file sizes I need? (I've included a rough sizing query after this list.)
- Would the best practice be the following (sketched in T-SQL after this list)?
  - create the new table on the new filegroup with a clustered index + PAGE compression
  - insert / select from the original table into the new table (with TF 610 and TABLOCK)
  - drop the original table
  - create the non-clustered indexes on the new filegroup
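On the sizing question, the closest I've come up with is the standard catalog-view query below, which reports actual used space per table after compression. Summing `used_mb` gives a floor for the new file sizes; I'd add headroom for the index rebuilds:

```sql
-- Actual used space per table, largest first.
SELECT t.name AS table_name,
       SUM(a.used_pages) * 8 / 1024 AS used_mb
FROM sys.tables AS t
JOIN sys.indexes AS i    ON i.object_id = t.object_id
JOIN sys.partitions AS p ON p.object_id = i.object_id AND p.index_id = i.index_id
JOIN sys.allocation_units AS a ON a.container_id = p.partition_id
GROUP BY t.name
ORDER BY used_mb DESC;
```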
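And here's a rough sketch of that per-table move (all object, filegroup, and column names are placeholders, and I'm assuming bulk-logged or simple recovery so the load is minimally logged):

```sql
-- 1. New table on the new filegroup, with the clustered index and
--    PAGE compression in place before the load.
CREATE TABLE dbo.BigFactTable_new
(
    id      bigint       NOT NULL,
    col2    int          NOT NULL,
    payload varchar(200) NULL
) ON [FG_COMPRESSED];

CREATE UNIQUE CLUSTERED INDEX CIX_BigFactTable_new
    ON dbo.BigFactTable_new (id)
    WITH (DATA_COMPRESSION = PAGE)
    ON [FG_COMPRESSED];

-- 2. Minimally logged load: TF 610 plus a TABLOCK hint on the target.
DBCC TRACEON (610);

INSERT INTO dbo.BigFactTable_new WITH (TABLOCK)
SELECT id, col2, payload
FROM dbo.BigFactTable;

DBCC TRACEOFF (610);

-- 3. Swap: drop the original, rename the copy.
DROP TABLE dbo.BigFactTable;
EXEC sp_rename 'dbo.BigFactTable_new', 'BigFactTable';

-- 4. Re-create the nonclustered indexes, also on the new filegroup.
CREATE NONCLUSTERED INDEX IX_BigFactTable_col2
    ON dbo.BigFactTable (col2)
    WITH (DATA_COMPRESSION = PAGE)
    ON [FG_COMPRESSED];
```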
This seems like a big undertaking that will take a long time, since I'll basically be re-creating my entire database... again. Is there a simpler solution that I'm missing?