
I am writing a procedure that will reconcile financial transactions on a live database. The work I am doing cannot be done as a set operation, so I am using two nested cursors.

I need to take an exclusive lock on the transaction table while I reconcile each client, but I would like to release the lock between clients so other people can run their queries in the meantime.

I would love to take an exclusive lock at the row level instead of the table level, but everything I have read so far says `with (XLOCK, ROWLOCK, HOLDLOCK)` will not block other transactions running at the READ COMMITTED isolation level (which mine are).

Am I taking a table-level exclusive lock correctly, and is there any way in SQL Server 2008 R2 to make row-level exclusive locks work the way I want without modifying the other queries running on the database?

declare client_cursor cursor local forward_only for 
     select distinct CLIENT_GUID from trnHistory
open client_cursor

declare @ClientGuid uniqueidentifier
declare @TransGuid uniqueidentifier

fetch next from client_cursor into @ClientGuid
WHILE (@@FETCH_STATUS <> -1)
BEGIN
    IF (@@FETCH_STATUS <> -2)
    BEGIN
        begin tran

        declare @temp int

        --The following will not work if the other connections are running at the READ COMMITTED isolation level
        --select @temp = 1
        --from trnHistory with (XLOCK, ROWLOCK, HOLDLOCK)
        --left join trnCB with (XLOCK, ROWLOCK, HOLDLOCK) on trnHistory.TRANS_GUID = trnCB.TRANS_GUID
        --left join trnClients with (XLOCK, ROWLOCK, HOLDLOCK) on trnHistory.TRANS_GUID = trnClients.TRANS_GUID
        --(Snip) --Other tables that will be "touched" during the reconcile
        --where trnHistory.CLIENT_GUID = @ClientGuid

        --Always works, but locks the whole table.
        select top 1 @temp = 1 from trnHistory with (XLOCK, TABLOCK)
        select top 1 @temp = 1 from trnCB with (XLOCK, TABLOCK)
        select top 1 @temp = 1 from trnClients with (XLOCK, TABLOCK)
        --(Snip) --Other tables that will be "touched" during the reconcile

        declare trans_cursor cursor local forward_only for 
                select TRANS_GUID from trnHistory where CLIENT_GUID = @ClientGuid order by TRANS_NUMBER
        open trans_cursor

        fetch next from trans_cursor into @TransGuid
        WHILE (@@FETCH_STATUS <> -1)
        BEGIN
            IF (@@FETCH_STATUS <> -2)
            BEGIN

                --Do Work here

            END
            fetch next from trans_cursor into @TransGuid
        END

        close trans_cursor
        deallocate trans_cursor

            --commit the transaction and release the lock, this allows other 
            -- connections to get a few queries in while it is safe to read.
        commit tran
    END

    fetch next from client_cursor into @ClientGuid
END 

close client_cursor
deallocate client_cursor
Scott Chamberlain
  • I'm trying to figure out why you need an exclusive lock. Are other people likely to be inserting records? Other people updating records? Are you worried about other people getting an inconsistent view of the data? – Laurence Nov 17 '12 at 23:57
  • @Laurence I am worried about other people getting an inconsistent view of the data. I am attempting to correct an error that affected a small % of clients; however, that correction process leaves several interdependent rows in several tables (I will actually be locking on 5 tables, but simplified my code example to one table) in an inconsistent state during the correction process. The inconsistency is isolated per client, but doing a `SELECT SUM(ColA) FROM trnHistory` over that one client would return an incorrect value during the "correction" process. So I need to take an exclusive lock to prevent reads. – Scott Chamberlain Nov 18 '12 at 00:21
  • I don't understand why a transaction wouldn't protect you from this, unless you have people doing read_uncommitted. – Laurence Nov 18 '12 at 00:23
  • @Laurence I know it does; my question is: "Am I taking a table-level exclusive lock correctly, and is there any way in SQL Server 2008 R2 to make row-level exclusive locks work the way I want without modifying the other queries running on the database?" As I said earlier, I am working with 5 databases, and I want to lock all 5 at the same moment in time, instead of as I touch each one for the first time, to preserve consistency. I will update my code example to show the other tables, as you made me realize I did make one mistake. – Scott Chamberlain Nov 18 '12 at 00:25
  • Are you saying the data is already inconsistent to start with, and you want to stop people reading it while you're fixing it? – Laurence Nov 18 '12 at 00:28
  • @Laurence No, the data is consistent but wrong by a correctable algorithm. However, to correct that offset a change must be made to the first link in the chain, breaking the "consistency", and that correction must be propagated down the chain, where each link could do a different thing with it. Once the whole chain has been corrected it is once again "consistent". During that correction process I cannot let reads happen (unless I can't stop them, as with read_uncommitted). – Scott Chamberlain Nov 18 '12 at 00:34
  • 1
    I'm probably being dense, but if you're only worried about other readers, isn't a transaction sufficient for this? begin transaction, make inconsistent, make consistent, commit transaction. Nobody sees the middle bit, unless they say they don't mind about inconsistent data. If you're worried about other writers, then I think row xlocks will be fine. You did mention 5 databases at one point though. There could be wrinkles with distributed transactions. – Laurence Nov 18 '12 at 01:32
  • When I said 5 databases I misspoke; I meant 5 tables. I cannot edit the old comment. – Scott Chamberlain Nov 18 '12 at 01:34
  • @Laurence The more I think about it, the more I think you are right. Turn that into an answer and I will mark it accepted. – Scott Chamberlain Nov 18 '12 at 01:40

2 Answers


I couldn't believe that an XLOCK would not block a concurrent reader at READ COMMITTED, so I just reproduced it: it is true. Script:

Session 1:

SET TRANSACTION ISOLATION LEVEL READ COMMITTED
BEGIN TRAN

SELECT * FROM T WITH (ROWLOCK, XLOCK, HOLDLOCK /*PAGLOCK, TABLOCKX*/) WHERE ID = 123

Session 2:

SET TRANSACTION ISOLATION LEVEL READ COMMITTED
BEGIN TRAN

SELECT * FROM T WHERE ID = 123

Plug in some table name that you have at hand. Session 2 is not blocked.

I also tried a PAGLOCK, but that didn't work either. Next I tried a TABLOCKX, but even that didn't work!

So your table-lock-based strategy does not work. I think you'll have to modify the readers so that they either:

  1. use snapshot isolation to get a consistent view (as of before any writes)
  2. use a higher isolation level to be blocked by the writer
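A minimal sketch of option 1, assuming a database named `MyDb` (the database name and the reader's query are hypothetical placeholders; the table names come from the question):

```sql
-- Run once against the database to make snapshot isolation available.
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Each reader then opts in per session. Instead of blocking on (or reading
-- past) the writer's X locks, it sees a consistent point-in-time view of the
-- data as of the start of its transaction.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
    SELECT SUM(ColA) FROM trnHistory WHERE CLIENT_GUID = @ClientGuid;
COMMIT TRAN;
```

Note that both statements require touching the readers (or at least their connection setup), which is exactly the constraint the question wants to avoid.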

Of course, there is a nasty workaround to really, really lock the table: alter its schema. This takes a Sch-M lock, which conflicts with basically any access to the table. It even holds off some metadata read operations. It could look like this:

--just change *any* setting in an idempotent way
ALTER TABLE T SET (LOCK_ESCALATION = AUTO)

I tested this and it works.


Is SQL Server right not to obey the XLOCK, or is this a flaw in the product? I think it is right, because it conforms to the documented properties of READ COMMITTED. Also, even under SERIALIZABLE there are cases where one transaction can lock a row exclusively while another reads the same row! This can happen in the presence of indexes: one transaction might take an X lock on the non-clustered index IX_T_SomeCol while another happily reads off the clustered index PK_T.

So it is actually quite normal that transactions can execute independently even in the presence of exclusive locking.
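The index scenario above can be sketched as follows (the table `T`, its columns, and the index names are assumptions for illustration; locks land on the index keys a query actually touches, so a lock on non-clustered keys need not block a clustered-index read):

```sql
-- Session 1: X-lock the row's entry in the non-clustered index.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRAN;
    SELECT SomeCol
    FROM T WITH (INDEX(IX_T_SomeCol), XLOCK, ROWLOCK)
    WHERE SomeCol = 'abc';
    -- (transaction left open; locks are held)

-- Session 2: read the same logical row via the clustered index.
-- This may proceed unblocked, because the X lock covers keys of
-- IX_T_SomeCol, not the clustered-index row in PK_T.
SELECT * FROM T WITH (INDEX(PK_T)) WHERE ID = 123;
```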

usr
  • 2
    I've found that only If I use REPEATABLE READ OR SERIALIZABLE isolation levels while reading sql blocks it (if uncommited rowlock+ xlock exists) – SalientBrain Sep 18 '13 at 19:00
  • 1
    Yes, Only isolation level REPEATABLE READ OR SERIALIZABLE could block the session 2, because even though session obtains XLOCK, and even though u do not commit the transactions, session 1 already release XLOCK after its reading is finished. It did not hold it any longer. – Xin Aug 29 '15 at 16:11
  • @Xin it did hold it due to `HOLDLOCK`. This is identical to `SERIALIZABLE` and holds all locks until the end of the transaction. – usr Aug 29 '15 at 17:20
  • I do not see this behaviour. I tried this on SQL Server 2005 and it blocks as I would expect. I'm not sure how you managed to get a transaction to read a row with an XLOCK on it, but the lock compatibility matrix is pretty clear about X being incompatible with S (i.e., you can't read committed data when there's an exclusive lock on the row). – Jim Mar 06 '17 at 12:50
  • @Jim I once read an authoritative blog post from one of the devs about it (Paul Randal?). It is an optimization: RC isolation does *not* take an S lock for each row, which would be expensive. Maybe ancient 2005 does not have this yet. Good to know. I certainly tested on >= 2008. – usr Mar 10 '17 at 10:06
  • @usr You're right, my bad. Maybe I ended up reading the same article, though it's by Paul White: http://sqlblog.com/blogs/paul_white/archive/2010/11/01/read-committed-shared-locks-and-rollbacks.aspx I still can't reproduce it, though; I must try a newer version. Very surprising result! But he does say, "SQL Server cannot apply the locking optimization if there is an uncommitted change on the containing page." – Jim Mar 11 '17 at 07:53

If you are only worried about other readers, then you shouldn't need exclusive locks; the pattern

Begin Transaction

  Make Data Inconsistent

  Make Data Consistent

Commit Transaction

should be fine. The only sessions that will see inconsistent data are those that use NOLOCK or READ UNCOMMITTED, or those that expect to make multiple consistent reads without using REPEATABLE READ or SERIALIZABLE.
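Using the table from the question, the pattern might look like the following sketch (the column `ColA`, the `@Correction` variable, and the two-step update are illustrative assumptions, not the actual reconciliation logic):

```sql
BEGIN TRAN;
    -- "Make data inconsistent": adjust the first link in the chain.
    UPDATE trnHistory
    SET ColA = ColA + @Correction
    WHERE TRANS_GUID = @FirstTransGuid;

    -- "Make data consistent": propagate the correction down the chain.
    UPDATE trnHistory
    SET ColA = ColA + @Correction
    WHERE CLIENT_GUID = @ClientGuid
      AND TRANS_GUID <> @FirstTransGuid;
-- Readers at READ COMMITTED or stricter never observe the state
-- between the two updates; they either block or see pre-transaction data.
COMMIT TRAN;
```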

In answer to the question, the correct way to take an exclusive lock, in my opinion, is to arrange things so the engine does it for you.

Laurence