I am working on a project with two applications developed in C# (.NET Framework 4):
- a WCF service (exposed to customers)
- an ASP.NET Web Forms application (head office use).
Both applications can select and update rows from a common "accounts" table in a SQL Server 2005 database. Each row in the accounts table holds a customer balance.
The business logic for a request in both applications involves selecting a row from "accounts" table, doing some processing based on the balance, followed by updating the balance in the database. The process between selecting and updating of the balance cannot participate in a transaction.
I realized it is possible that, between selecting the row and updating it, the row could be selected and updated by another request from the same or a different application.
I found this issue described in the article below; I am referring to the second scenario of a "lost update". http://www.codeproject.com/Articles/342248/Locks-and-Duration-of-Transactions-in-MS-SQL-Serve
The second scenario is when one transaction (Transaction A) reads a record and retrieves the value into a local variable, and that same record is then updated by another transaction (Transaction B). Later, Transaction A updates the record using the value in the local variable. In this scenario the update done by Transaction B can be considered a "lost update".
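In T-SQL terms, the problematic sequence looks roughly like the sketch below (the accounts schema with AccountId and Balance columns, and the amounts, are hypothetical placeholders for illustration):

```sql
-- Assumed schema: accounts(AccountId INT PRIMARY KEY, Balance DECIMAL(18,2))
-- Transaction A reads the balance into a local variable (say it reads 100):
DECLARE @balance DECIMAL(18,2);
SELECT @balance = Balance FROM accounts WHERE AccountId = 1;

-- Transaction B runs between A's read and A's write:
-- UPDATE accounts SET Balance = Balance - 30 WHERE AccountId = 1;  -- balance is now 70

-- Transaction A writes back a value computed from its stale read,
-- silently overwriting B's change (the "lost update"):
UPDATE accounts SET Balance = @balance - 50 WHERE AccountId = 1;  -- balance = 50, not 20
```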
I am looking for a way to prevent the above situation and to prevent balance from becoming negative if multiple concurrent requests are received for the same row. A row should be selected and updated by only a single request (from either application) at a time to ensure the balance is consistent.
I am thinking along the lines of blocking access to a row as soon as it has been selected by one request. Based on my research below are my observations.
- Isolation levels
With 'Repeatable read' isolation level it is possible for 2 transactions to select a common row.
I tested this by opening two SSMS windows. In both windows I started a transaction with the Repeatable Read isolation level, followed by a select on a common row. I was able to select the row in each transaction.
Next I tried to update the same row from each transaction. The statements kept running for a few seconds. Then the update from the first transaction succeeded, while the update from the second transaction failed with the message below.
Error 1205 : Transaction (Process ID) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
So if I am using a transaction with Repeatable Read, it should not be possible for two concurrent transactions to update the same row; SQL Server automatically chooses to roll back one of the transactions. Is this correct?
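For reference, this is what I ran in each SSMS window (a sketch assuming the same hypothetical AccountId/Balance columns):

```sql
-- Run in two SSMS windows; each window is a separate session.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;

-- Both sessions succeed: Repeatable Read takes shared (S) locks,
-- and shared locks are compatible with each other.
SELECT Balance FROM accounts WHERE AccountId = 1;

-- Then run this in both windows:
-- each session needs an exclusive (X) lock but is blocked by the
-- other session's S lock, so SQL Server detects a deadlock and
-- kills one session with error 1205.
UPDATE accounts SET Balance = Balance - 50 WHERE AccountId = 1;

COMMIT TRANSACTION;  -- only reached by the surviving session
```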
But I would also like to avoid the deadlock error by allowing a particular row to be selected by a single transaction only.
- Rowlock
I found the answer below on Stack Overflow, which mentions the use of the ROWLOCK hint to prevent the deadlock (see the comment on the accepted answer):
Minimum transaction isolation level to avoid "Lost Updates"
I started a transaction and used a select statement with the ROWLOCK and UPDLOCK hints. Then, in a new SSMS window, I started another transaction and tried to run the same select query (with the same hints). This time I was not able to select the row; the statement kept running in the new SSMS window.
So the use of ROWLOCK and UPDLOCK within a transaction seems to block the row against select statements that use the same lock hints.
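The blocking behavior I observed can be reproduced like this (again assuming the hypothetical AccountId/Balance columns):

```sql
-- Window 1:
BEGIN TRANSACTION;
-- UPDLOCK takes an update (U) lock that is held until COMMIT/ROLLBACK;
-- ROWLOCK asks SQL Server to lock at row granularity.
SELECT Balance FROM accounts WITH (UPDLOCK, ROWLOCK) WHERE AccountId = 1;

-- Window 2 (while window 1's transaction is still open):
BEGIN TRANSACTION;
-- U locks are incompatible with each other, so this SELECT blocks
-- until window 1 commits or rolls back.
SELECT Balance FROM accounts WITH (UPDLOCK, ROWLOCK) WHERE AccountId = 1;
```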
I would appreciate it if someone could answer the questions below.
Are my observations regarding isolation levels and rowlock correct?
For the scenario that I described, should I use the ROWLOCK and UPDLOCK hints to block access to a row? If not, what is the correct approach?
I am planning to place my select and update code in a transaction. The first select query in the transaction will use the ROWLOCK and UPDLOCK hints. This will prevent the record from being selected by another transaction that uses a select with the same lock hints to retrieve the same row.
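A sketch of the planned pattern, with the schema, account id, and amount as illustrative placeholders:

```sql
-- Assumed schema: accounts(AccountId INT PRIMARY KEY, Balance DECIMAL(18,2))
BEGIN TRANSACTION;

DECLARE @balance DECIMAL(18,2);

-- UPDLOCK holds an update lock on the row for the life of the transaction,
-- so a concurrent transaction issuing the same SELECT blocks here instead
-- of reading a balance that is about to change.
SELECT @balance = Balance
FROM accounts WITH (UPDLOCK, ROWLOCK)
WHERE AccountId = 1;

-- ... business processing based on @balance happens here ...

-- Write back the new balance computed from the protected value.
UPDATE accounts
SET Balance = @balance - 50
WHERE AccountId = 1;

COMMIT TRANSACTION;
```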