As part of a retail closing process, there is a transactional stored procedure that selects from each of 18 tables and inserts the results into a separate database for later mainframe processing. The procedure is showing some strange timing behavior, and I suspect it comes from a fundamental misunderstanding of how transactions work in SQL Server.
I recognize that this isn't the best architecture for the problem, and a replacement is being developed, but in the meantime I need to improve this process.
The stored procedure runs on user request and looks something like this:
BEGIN TRANSACTION;

INSERT INTO Table1
    (Column1, Column2, Column3, Column4,
     Column5, Column6, Column7, Column8)
SELECT
    Column1, Column2, Column3, Column4,
    Column5, Column6, Column7, Column8
FROM OLTPTable T
    INNER JOIN LookupTable1 L
        ON T.Foreign = L.Key;

INSERT INTO Table2
    (Column1, Column2, Column3)
SELECT
    Column1, Column2, Column3
FROM OLTPTable2 T
    INNER JOIN LookupTable2 L
        ON T.Foreign = L.Key;

INSERT INTO Table3
    (Column1, Column2, Column3,
     Column4, Column5, Column6)
SELECT
    Column1, Column2, Column3,
    Column4, Column5, Column6
FROM OLTPTable3 T
    INNER JOIN LookupTable3 L
        ON T.Foreign = L.Key;

-- ... through Table18 and OLTPTable18

COMMIT TRANSACTION;
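For context, the per-table timings are captured by instrumentation along these lines (a sketch only; the variable name and PRINT format are my stand-ins for the real logging code):

-- Sketch of the per-insert timing instrumentation (names are placeholders).
DECLARE @start datetime2 = SYSDATETIME();

INSERT INTO Table1 (Column1 /* , ... */)
SELECT Column1 /* , ... */
FROM OLTPTable T
    INNER JOIN LookupTable1 L
        ON T.Foreign = L.Key;

-- @@ROWCOUNT still reflects the INSERT when PRINT runs immediately after it.
PRINT CONCAT('Table1 ',
             DATEDIFF(millisecond, @start, SYSDATETIME()) / 1000.0,
             ' seconds ', @@ROWCOUNT, ' rows');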
The logging looks something like this:
Table1    0.2 seconds    354 rows
Table2    7.4 seconds     35 rows
Table3    3.9 seconds     99 rows
There is no clear correlation between row counts or join complexity and the elapsed time.
My question is: on a long procedure like this, what is the effect of the transaction? Does it lock all the tables referenced in the subselects at the start, or one at a time? Is the procedure waiting to acquire a lock on each source table, and is that what causes the waits?
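To check whether lock waits are the cause, I plan to run something like this from a second connection while the procedure executes (a sketch; 52 stands in for the procedure's actual session_id):

-- What is the slow session currently waiting on?
SELECT r.session_id, r.status, r.command,
       r.wait_type, r.wait_time, r.blocking_session_id
FROM sys.dm_exec_requests AS r
WHERE r.session_id = 52;

-- Which locks does that session hold or wait for?
SELECT l.resource_type, l.resource_associated_entity_id,
       l.request_mode, l.request_status
FROM sys.dm_tran_locks AS l
WHERE l.request_session_id = 52;

A non-NULL blocking_session_id or a GRANT-less WAIT row in sys.dm_tran_locks would point at lock contention rather than the inserts themselves.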