To avoid the pain of creating a 15TB index (which you could, of course, create on a new filegroup on the new disks), you can instead create a new partitioned table from scratch (or a non-partitioned one, if you never need to manage / archive / purge old data), start writing all new incoming data there, and slowly move the existing data over.
Borrowing from my own answer here:
- create a new partitioned table on the new disk with enough partitions to happily divide up all the existing data plus a reasonable runway of future data
- create a view that `UNION ALL`s the two tables (it may be simplest to rename the current table and replace it, temporarily, with a view having the old table name); see the setup sketch after this list
- direct new writes to the new, partitioned table
- hopefully you control writes through a stored procedure, or through a minimal number of places where the target name has to change
- if not, you can use an `INSTEAD OF` trigger on the view to redirect writes to the partitioned table (see the trigger sketch after this list)
- in the background, start running `delete top (@batchsize) output deleted.* into new_table from old_table` in a loop (see the backfill sketch after this list)
- it doesn't matter how long this takes to get everything done; the trick is to tune the batch size so it balances making progress against not causing blocking for too long, and to make sure you take log backups every n batches if they're not already scheduled frequently enough (some more info on that here)
- once all the data is backfilled, you can drop the old table, then change the view to no longer `UNION ALL` (or get rid of the view and rename the new table; see the cleanup sketch after this list)
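Here's a rough sketch of that setup. All object names are hypothetical (a `dbo.Orders` table partitioned on an `OrderDate` column, a `NewDisk_FG1` filegroup already created on the new disks); adjust the boundaries, columns, and filegroup mapping to your actual schema:

```sql
-- Hypothetical names throughout: dbo.Orders is the existing table,
-- OrderDate is the partitioning column, NewDisk_FG1 is a filegroup
-- already created on the new disks.

-- Boundaries cover existing data plus a reasonable runway of future dates
CREATE PARTITION FUNCTION PF_OrderDate (datetime)
    AS RANGE RIGHT FOR VALUES ('20230101', '20240101', '20250101', '20260101');

CREATE PARTITION SCHEME PS_OrderDate
    AS PARTITION PF_OrderDate ALL TO ([NewDisk_FG1]);
GO

-- New table, same shape as the old one, clustered on the partition scheme
CREATE TABLE dbo.Orders_New
(
    OrderID   bigint        NOT NULL,
    OrderDate datetime      NOT NULL,
    Amount    decimal(12,2) NOT NULL,
    CONSTRAINT PK_Orders_New PRIMARY KEY CLUSTERED (OrderDate, OrderID)
) ON PS_OrderDate (OrderDate);
GO

-- Rename the old table out of the way and put a view in its place,
-- so existing readers keep working with no code changes
EXEC sys.sp_rename N'dbo.Orders', N'Orders_Old';
GO

CREATE VIEW dbo.Orders
AS
    SELECT OrderID, OrderDate, Amount FROM dbo.Orders_New
    UNION ALL
    SELECT OrderID, OrderDate, Amount FROM dbo.Orders_Old;
GO
```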
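If you can't easily redirect the writers, an `INSTEAD OF INSERT` trigger on the view can do it for you. Same hypothetical names as above:

```sql
-- Routes every insert against the view into the new partitioned table,
-- no matter which "table" the writing code thinks it's talking to
CREATE TRIGGER dbo.trg_Orders_Insert
ON dbo.Orders
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;

    INSERT dbo.Orders_New (OrderID, OrderDate, Amount)
        SELECT OrderID, OrderDate, Amount
        FROM inserted;
END;
GO
```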
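And a sketch of the background backfill loop, again with hypothetical names; the batch size, backup cadence, database name, and backup path are all placeholders to tune for your environment:

```sql
-- Background backfill: move rows from the old table in small batches
DECLARE @BatchSize           int = 5000,
        @BatchesPerLogBackup int = 10,
        @Batches             int = 0;

WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize) dbo.Orders_Old
        OUTPUT deleted.OrderID, deleted.OrderDate, deleted.Amount
        INTO dbo.Orders_New (OrderID, OrderDate, Amount);

    IF @@ROWCOUNT = 0 BREAK;   -- nothing left to move

    SET @Batches += 1;

    -- In FULL / BULK_LOGGED recovery, keep the log in check if backups
    -- aren't already scheduled frequently enough
    IF @Batches % @BatchesPerLogBackup = 0
        BACKUP LOG [YourDatabase] TO DISK = N'X:\Backups\YourDatabase.trn';

    WAITFOR DELAY '00:00:01';  -- small breather so other work isn't starved
END;
```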
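Finally, once the old table is empty, the cleanup might look like this (same hypothetical names; the alternative is to `ALTER VIEW` to drop the `UNION ALL` branch and keep the view around):

```sql
-- Cleanup once dbo.Orders_Old is empty: drop it, drop the view
-- (which also removes the INSTEAD OF trigger), and rename the
-- partitioned table into the original name
DROP TABLE dbo.Orders_Old;
GO

DROP VIEW dbo.Orders;
GO

EXEC sys.sp_rename N'dbo.Orders_New', N'Orders';
GO
```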
If it takes two weeks to backfill all the old data into the new partition scheme, so what? Users aren't waiting for two weeks; they're only ever waiting on any individual batch (and even then, that'll largely just be the folks querying the data that's moving over, not the new data).