I like the sp_getapplock approach and use it myself in a few places for its flexibility and because it gives you full control over the locking logic and wait times.
The only problem that I see is that in your case concurrent processes are not all equal.
You have SP1 that moves data from the staging table into the main table. Your system never tries to run several instances of this SP.
Another procedure, SP2, inserts data into the staging table; it can be run several times simultaneously, and that is fine.
It is easy to implement locking that prevents any concurrent run of any combination of SP1 and SP2. In other words, it is easy if the locking logic is the same for SP1 and SP2 and they are treated equally. But then you can't have several instances of SP2 running simultaneously.
It is not obvious how to implement locking that prevents a concurrent run of SP1 and SP2 while still allowing several instances of SP2 to run simultaneously.
There is another approach that doesn't attempt to prevent concurrent runs of the SPs at all, but instead expects and embraces the fact that simultaneous runs are possible.
One way to do it is to add an IDENTITY column to the staging table. Alternatively, you can use an automatically populated datetime column, if you can guarantee that it is unique and never decreases (which can be tricky), or a rowversion column.
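For concreteness, adding an IDENTITY column to an existing staging table could look like the following sketch; dbo.StagingTable and ID are placeholder names, not your actual schema:

    -- Add an ever-increasing ID to the existing staging table
    -- (dbo.StagingTable and ID are placeholder names)
    ALTER TABLE dbo.StagingTable
        ADD ID bigint IDENTITY(1,1) NOT NULL;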
The logic inside SP2 that inserts data into the staging table doesn't change.
The logic inside SP1 that moves data from the staging table into the main table needs to use these identity values.
First, read the current maximum identity value from the staging table and remember it in a variable, say @MaxID. All subsequent SELECTs, UPDATEs, and DELETEs against the staging table in SP1 should include the filter WHERE ID <= @MaxID.
This ensures that if a new row happens to be added to the staging table while SP1 is running, that row is not processed and remains in the staging table until the next run of SP1.
The drawback of this approach is that you can't use TRUNCATE; you need to use DELETE with WHERE ID <= @MaxID instead.
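To make this concrete, here is a minimal sketch of what SP1 could look like with this approach. The names SP1_MoveStaging, dbo.StagingTable, dbo.MainTable, ID, Col1, and Col2 are assumptions for illustration, not your actual schema:

    CREATE PROCEDURE SP1_MoveStaging  -- hypothetical name for SP1
    AS
    BEGIN
        SET NOCOUNT ON;
        SET XACT_ABORT ON;

        -- Remember the current maximum ID before touching anything
        DECLARE @MaxID bigint;
        SELECT @MaxID = MAX(ID) FROM dbo.StagingTable;

        -- Staging table is empty, nothing to move
        IF @MaxID IS NULL
            RETURN;

        BEGIN TRANSACTION;

        -- Move only rows that existed when @MaxID was captured;
        -- rows inserted later stay for the next run of SP1
        INSERT INTO dbo.MainTable (Col1, Col2)
        SELECT Col1, Col2
        FROM dbo.StagingTable
        WHERE ID <= @MaxID;

        -- DELETE with the same filter instead of TRUNCATE
        DELETE FROM dbo.StagingTable
        WHERE ID <= @MaxID;

        COMMIT TRANSACTION;
    END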
If you are OK with several instances of SP2 waiting for each other (and for SP1), then you can use sp_getapplock similar to the following. I have this code in my stored procedure. You should put this logic into both SP1 and SP2.
I'm not calling sp_releaseapplock explicitly here, because the lock owner is set to Transaction and the engine will release the lock automatically when the transaction ends.
You don't have to put the retry logic in the stored procedure; it can live in the external code that runs these stored procedures. In any case, your code should be ready to retry.
CREATE PROCEDURE SP2 -- or SP1
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT ON;

    BEGIN TRANSACTION;
    BEGIN TRY
        -- Maximum number of retries
        DECLARE @VarCount int = 10;
        DECLARE @VarLockResult int;

        WHILE (@VarCount > 0)
        BEGIN
            SET @VarCount = @VarCount - 1;

            EXEC @VarLockResult = sp_getapplock
                @Resource = 'StagingTable_app_lock',
                -- this resource name should be the same in SP1 and SP2
                @LockMode = 'Exclusive',
                @LockOwner = 'Transaction',
                @LockTimeout = 60000,
                -- I'd set this timeout to be about twice the time
                -- you expect the SP to run normally
                @DbPrincipal = 'public';

            IF @VarLockResult >= 0
            BEGIN
                -- Acquired the lock

                -- for SP2
                -- INSERT INTO StagingTable ...

                -- for SP1
                -- SELECT FROM StagingTable ...
                -- TRUNCATE StagingTable ...

                -- don't retry any more
                BREAK;
            END ELSE BEGIN
                -- wait for 5 seconds and retry
                WAITFOR DELAY '00:00:05';
            END;
        END;

        -- Note: if the lock was never acquired after all retries,
        -- execution falls through to here and the transaction commits
        -- without having done any work.
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        ROLLBACK TRANSACTION;
        -- log error
    END CATCH;
END
This code guarantees that only one procedure is working with the staging table at any given moment. There is no concurrency. All other instances will wait.
Obviously, if you access the staging table through anything other than SP1 or SP2 (which try to acquire the lock first), that access will not be blocked.
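On the earlier point about putting the retry logic in the external code instead: the following is a minimal caller-side sketch. It assumes the procedure re-throws errors from its CATCH block (for example with THROW) rather than only logging them; dbo.SP2 and the attempt count are placeholders:

    -- Caller-side retry loop (a sketch, not part of the procedures above)
    DECLARE @Attempts int = 3;

    WHILE @Attempts > 0
    BEGIN
        BEGIN TRY
            EXEC dbo.SP2;  -- placeholder procedure name
            BREAK;         -- success, stop retrying
        END TRY
        BEGIN CATCH
            SET @Attempts = @Attempts - 1;
            IF @Attempts = 0
            BEGIN
                THROW;     -- give up and surface the error to the caller
            END;
            WAITFOR DELAY '00:00:05';  -- wait before retrying
        END CATCH;
    END;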