We have a large database with several tables of 10-50 million rows each. In reality we only need the data for the past 3 years, so we created new tables to hold only the latest data. The new tables are exactly like the original ones, e.g. they have the same indexes on the same partition scheme ... same everything.
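Roughly speaking, each new table was populated like this (the table and column names dbo.Orders, dbo.Orders_Recent and OrderDate are made up for illustration; the real objects differ):

-- copy only the last 3 years into the new table
SELECT *
INTO dbo.Orders_Recent                     -- hypothetical slim copy of dbo.Orders
FROM dbo.Orders
WHERE OrderDate >= DATEADD(YEAR, -3, GETDATE());
-- afterwards the same indexes and partitioning were scripted onto the new table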
And everything went perfectly. The row counts of the new tables are now ~10-15 times smaller than the originals. Initial performance measurements showed a significant gain, but then we found that some other stored procedures perform worse than before - they now take 100% more time, e.g. from ~2 minutes to ~4.
The table swapping was done via sp_rename.
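A simplified version of the swap, using the same illustrative names as above (the real names differ):

EXEC sp_rename 'dbo.Orders', 'Orders_Full';      -- put the original table aside
EXEC sp_rename 'dbo.Orders_Recent', 'Orders';    -- the slim table takes over the original name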
We rebuilt all the indexes and even rebuilt the statistics, but the effect was very small.
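This is essentially what we ran on each new table (again, the table name is illustrative):

ALTER INDEX ALL ON dbo.Orders REBUILD;          -- rebuild every index on the table
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;     -- refresh statistics from a full scan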
Update: we cleared all execution plans via:
DBCC DROPCLEANBUFFERS   -- drops clean data pages from the buffer pool
DBCC FREEPROCCACHE      -- clears the plan cache, forcing plans to recompile
GO
Fortunately for me, when I switch back to the original tables, the problematic stored procedures run as fast as before. Right now I am comparing the execution plans, but it is painful because those SPs are huge.
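For what it's worth, this is roughly how I am pulling the cached plan of one of the problematic procedures to compare (the procedure name dbo.MyProblematicProc is made up):

SELECT OBJECT_NAME(ps.object_id) AS procedure_name,
       ps.last_elapsed_time,
       qp.query_plan                              -- XML plan, can be opened in SSMS
FROM sys.dm_exec_procedure_stats AS ps
CROSS APPLY sys.dm_exec_query_plan(ps.plan_handle) AS qp
WHERE ps.object_id = OBJECT_ID('dbo.MyProblematicProc');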
Any help will be appreciated.