
I have an arbitrary stored procedure usp_DoubleCheckLockInsert that does an INSERT for multiple clients, and I want to give the stored procedure exclusive write access to a table SomeTable while it is inside the critical section between Begin lock and End lock.

CREATE PROCEDURE usp_DoubleCheckLockInsert
     @Id INT
    ,@SomeValue INT
AS
BEGIN
    IF (EXISTS(SELECT 1 FROM SomeTable WHERE Id = @Id AND SomeValue = @SomeValue)) RETURN
    BEGIN TRAN
        --Begin lock
        IF (EXISTS(SELECT 1 FROM SomeTable WHERE Id = @Id AND SomeValue = @SomeValue))
        BEGIN
            ROLLBACK
            RETURN
        END

        INSERT INTO SomeTable(Id, SomeValue)
        VALUES(@Id,@SomeValue);
        --End lock
    COMMIT
END

I have seen how isolation levels relate to updates, but is there a way to implement locking in the critical section and give the transaction the write lock, or does T-SQL not work this way?

Obtain Update Table Lock at start of Stored Procedure in SQL Server

Dustin Kingen
  • Why not just create a unique constraint on `Id, SomeValue`? See [Only inserting a row if it's not already there](http://stackoverflow.com/q/3407857/73226) – Martin Smith Apr 30 '13 at 11:33
  • That is a way to fix the problem at the data level, but in my case some records have a soft delete flag called `IsDelete`, so multiple records of the same `Id` and `SomeValue` are allowed. – Dustin Kingen Apr 30 '13 at 11:38
  • You can use a unique filtered index to enforce this. `CREATE UNIQUE INDEX IX ON SomeTable(Id,SomeValue) WHERE IsDelete = 0` – Martin Smith Apr 30 '13 at 11:38
  • Is this really preferable over double check locking? This will cause an error on insert versus simply returning to the caller which means I'll need a `BEGIN TRY` block and all the associated `BEGIN CATCH` cruft. – Dustin Kingen Apr 30 '13 at 11:46
  • If this is the only way users can access this table you might want to look into sp_getapplock http://msdn.microsoft.com/en-us/library/ms189823.aspx – JStead Apr 30 '13 at 11:56
  • @MartinSmith I think a unique index is good to ensure duplicates do not get inserted into the table, but I would rather be proactive than reactive when possible. – Dustin Kingen Apr 30 '13 at 12:28
  • "Is this really preferable over double check locking?" - yes, yes it is. After all the contortions you'll go through, SQL Server will still apply all of *its* normal locking mechanisms. In this case, catching the error *is* the preferred way to work. – Damien_The_Unbeliever May 02 '13 at 14:56

5 Answers


A second approach which works for me is to combine the INSERT and the SELECT into a single operation.

This index is needed only for querying SomeTable efficiently. Note that there is NO uniqueness constraint. However, if I were taking this approach, I would actually make the index unique.

CREATE INDEX [IX_SomeTable_Id_SomeValue_IsDelete] ON [dbo].[SomeTable]
(
    [Id] ASC,
    [SomeValue] ASC,
    [IsDelete] ASC
)

The stored proc, which combines the INSERT/SELECT operations:

CREATE PROCEDURE [dbo].[usp_DoubleCheckLockInsert]
     @Id INT
    ,@SomeValue INT
    ,@IsDelete BIT
AS
BEGIN
    -- Don't allow dirty reads
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED

    BEGIN TRAN
        -- Insert only if the row does not already exist
        INSERT INTO dbo.SomeTable(Id, SomeValue, IsDelete)
        SELECT @Id, @SomeValue, @IsDelete
        WHERE NOT EXISTS (
            SELECT * FROM dbo.SomeTable WITH (HOLDLOCK, UPDLOCK)
            WHERE Id = @Id
              AND SomeValue = @SomeValue
              AND IsDelete = @IsDelete)
    COMMIT
END

I did try this approach using multiple processes to insert data (though I admit I didn't put a lot of stress on SQL Server). There were never any duplicates or failed inserts.

chue x
  • Both of your answers are correct. I will accept the one you feel better follows SQL conventions. – Dustin Kingen May 06 '13 at 11:43
  • @Romoku - to me they would both be equally fine. Maybe you should wait a bit and see which gets more votes. Of course for this answer, I would create a unique index (as I suggest above in the answer itself). – chue x May 06 '13 at 11:59
  • @Romoku - upon further testing I found that my original SQL would occasionally fail because it tried to insert a record that was already there. The fix for this is to add the table hints (`holdlock`, `updlock`) in the sub-select. The `holdlock` is equivalent to `serializable`, which single threads all access. Also, according to MSDN if there are table hints, then any `TRANSACTION ISOLATION LEVEL` statements are ignored for those tables. I have updated my answer to reflect my findings. – chue x May 07 '13 at 14:25

It seems all you are trying to do is to prevent duplicate rows from being inserted. You can do this by adding a unique index, with the option IGNORE_DUP_KEY = ON:

CREATE UNIQUE INDEX [IX_SomeTable_Id_SomeValue_IsDelete]
ON [dbo].[SomeTable]
(
    [Id] ASC,
    [SomeValue] ASC,
    [IsDelete] ASC
) WITH (IGNORE_DUP_KEY = ON)

Any inserts with duplicate keys will be ignored by SQL Server. Running the following:

INSERT INTO [dbo].[SomeTable] ([Id],[SomeValue],[IsDelete])
VALUES(0,0,0)

INSERT INTO [dbo].[SomeTable] ([Id],[SomeValue],[IsDelete])
VALUES(1,1,0)

INSERT INTO [dbo].[SomeTable] ([Id],[SomeValue],[IsDelete])
VALUES(2,2,0)

INSERT INTO [dbo].[SomeTable] ([Id],[SomeValue],[IsDelete])
VALUES(0,0,0)

Results in:

(1 row(s) affected)

(1 row(s) affected)

(1 row(s) affected)

Duplicate key was ignored.
(0 row(s) affected)

I did not test the above using multiple processes (threads), but the results in that case should be the same: SQL Server should still ignore any duplicates, no matter which thread attempts the insert.

See also Index Options at MSDN.
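For comparison, if the unique index were created without IGNORE_DUP_KEY, a duplicate insert would raise an error instead of being silently ignored, and the caller would have to absorb it with the TRY/CATCH pattern discussed in the comments on the question. A sketch of what that might look like (error numbers are SQL Server's standard duplicate-key codes; THROW requires SQL Server 2012+):

```sql
BEGIN TRY
    INSERT INTO dbo.SomeTable(Id, SomeValue, IsDelete)
    VALUES(@Id, @SomeValue, @IsDelete);
END TRY
BEGIN CATCH
    -- 2601: duplicate key in a unique index; 2627: unique constraint violation
    IF ERROR_NUMBER() NOT IN (2601, 2627)
        THROW;   -- rethrow anything that isn't a duplicate-key error
END CATCH
```

IGNORE_DUP_KEY = ON simply moves this handling into the storage engine, so the caller sees a warning and zero rows affected rather than an error.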

chue x

I think I may not understand the question, but why couldn't you do this:

begin tran
if ( not exists ( select 1 from SomeTable where Id = @ID and SomeValue = @SomeValue ) )
    insert into SomeTable ( Id, SomeValue ) values ( @ID, @SomeValue )
commit

Yes, you have a transaction every time you do this, but as long as you are fast that shouldn't be a problem.

I have a feeling I'm not understanding the question.

Jeff.

Jeff B
  • But if you are running concurrently, shouldn't everybody else wait until you release the transaction with the commit? – Jeff B May 02 '13 at 18:34

As soon as you start overriding SQL Server's preferred locking management, you take the burden on yourself. But if you're certain this is what you need, update your stored procedure to select into a test variable and replace your "EXISTS" check with a check on that variable. When you run the query that populates the variable, take an exclusive table lock, and the table is yours until you're done.

CREATE PROCEDURE usp_DoubleCheckLockInsert
     @Id INT
    ,@SomeValue INT
AS
BEGIN
    IF (EXISTS(SELECT 1 FROM SomeTable WHERE Id = @Id AND SomeValue = @SomeValue)) RETURN
    BEGIN TRAN
        --Begin lock
        DECLARE @tId AS INT
        -- You already checked and the record doesn't exist, so lock the table
        SELECT @tId = Id
        FROM SomeTable WITH (TABLOCKX)
        WHERE Id = @Id AND SomeValue = @SomeValue
        IF @tId IS NULL
        BEGIN
            -- No one snuck in between the first and second checks, so insert
            INSERT INTO SomeTable(Id, SomeValue)
            VALUES(@Id, @SomeValue);
        END
        --End lock
    COMMIT
END

If you execute this as a query but don't hit the commit, and then try selecting from the table from a different session, you will sit and wait until the commit happens.

Bill Melius

Romoku, the answers you're getting are basically right, except that

  • you don't even need BEGIN TRAN, and
  • you don't need to worry about isolation levels.

All you need is a simple insert ... select ... where not exists (select ...) as suggested by Jeff B and Chue X.
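A minimal sketch of that single-statement shape, using the question's table and parameters:

```sql
INSERT INTO SomeTable(Id, SomeValue)
SELECT @Id, @SomeValue
WHERE NOT EXISTS (SELECT 1 FROM SomeTable
                  WHERE Id = @Id AND SomeValue = @SomeValue);
```

The existence check and the insert are evaluated within one statement, rather than as two separate statements with a gap between them.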

Your concerns about concurrency ("I'm talking about concurrency and your answer will not work.") reveal a profound misunderstanding of how SQL works.

SQL INSERT is atomic. You don't have to lock the table; that's what the DBMS does for you.

Instead of offering a bounty for misbegotten questions based on erroneous preconceived notions -- and then summarily dismissing right answers as wrong -- I recommend sitting down with a good book. On SQL. I can suggest some titles if you like.

James K. Lowden
  • Jeff never suggested the `insert ... select ... where not exists (select ...)` which would work. You are correct that I did not realize `INSERT` is atomic, so combining the duplicate checking logic into the `INSERT` statement is the proactive approach I was seeking. – Dustin Kingen May 06 '13 at 11:41