It depends on the scenario. I recommend the following steps to decide where to go next.
Find the most expensive queries
Use the following SQL to determine the most expensive queries:
SELECT TOP 10
    -- Extract the individual statement text from the containing batch:
    SUBSTRING(qt.TEXT, (qs.statement_start_offset/2)+1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(qt.TEXT)
            ELSE qs.statement_end_offset
          END - qs.statement_start_offset)/2)+1) AS statement_text,
    qs.execution_count,
    qs.total_logical_reads, qs.last_logical_reads,
    qs.total_logical_writes, qs.last_logical_writes,
    qs.total_worker_time,
    qs.last_worker_time,
    qs.total_elapsed_time/1000000 AS total_elapsed_time_in_S,
    qs.last_elapsed_time/1000000 AS last_elapsed_time_in_S,
    qs.last_execution_time,
    qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
ORDER BY qs.total_logical_reads DESC -- logical reads
-- ORDER BY qs.total_logical_writes DESC -- logical writes
-- ORDER BY qs.total_worker_time DESC -- CPU time
Execution plan
This can help you determine what is actually going on with your query. More information can be found here.
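If you want to capture the actual plan for a single statement without the SSMS UI, one option is the session-level XML plan output. A minimal sketch; the SELECT is a hypothetical placeholder for your own query:
SET STATISTICS XML ON;  -- returns the actual execution plan as XML alongside the results
SELECT COUNT(*) FROM dbo.TargetTable;  -- replace with the query you are investigating
SET STATISTICS XML OFF;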
Performance tips
- Indexes. Remove all indexes, except for those needed by the insert (SELECT INTO); see the sketch after this list.
- Constraints and triggers. Remove them from the table.
- Choose a good clustered index. New records should be inserted at the end of the table (for example, with an ever-increasing key), so pages fill sequentially instead of splitting.
- Fill factor. Set it to 0 or 100 (they are equivalent). This reduces the number of pages the data is spread across.
- Recovery model. Change it to Simple.
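A minimal sketch of the tips above, assuming a hypothetical target table dbo.TargetTable with a nonclustered index IX_TargetTable_Col, a trigger trg_TargetTable, and a clustered primary key PK_TargetTable (all object names, and the database name MyDatabase, are placeholders):
-- Disable nonclustered indexes before the load; rebuild them afterwards.
ALTER INDEX IX_TargetTable_Col ON dbo.TargetTable DISABLE;
-- Disable triggers and check constraints for the duration of the insert.
ALTER TABLE dbo.TargetTable DISABLE TRIGGER trg_TargetTable;
ALTER TABLE dbo.TargetTable NOCHECK CONSTRAINT ALL;
-- Rebuild the clustered index with fill factor 100 (equivalent to 0).
ALTER INDEX PK_TargetTable ON dbo.TargetTable REBUILD WITH (FILLFACTOR = 100);
-- Switch to the Simple recovery model.
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;
-- After the load: re-enable everything and revalidate the constraints.
ALTER INDEX IX_TargetTable_Col ON dbo.TargetTable REBUILD;
ALTER TABLE dbo.TargetTable ENABLE TRIGGER trg_TargetTable;
ALTER TABLE dbo.TargetTable WITH CHECK CHECK CONSTRAINT ALL;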
Also
Consider reviewing Insert into table select * from table vs bulk insert.
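For context, these are the two patterns being compared there (table and file names are hypothetical; BULK INSERT only applies when the source is a file):
-- Set-based copy between tables. With TABLOCK, this can qualify for
-- minimal logging under the Simple or Bulk-logged recovery model.
INSERT INTO dbo.TargetTable WITH (TABLOCK)
SELECT * FROM dbo.SourceTable;
-- Loading the same data from an external file instead.
BULK INSERT dbo.TargetTable
FROM 'C:\data\source.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK);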