I am working on a project with 2 applications developed in C# (.NET Framework 4):

  1. WCF service (exposed to customers)
  2. ASP.NET Web Forms application (head office use).

Both applications can select and update rows from a common "accounts" table in a SQL Server 2005 database. Each row in the accounts table holds a customer balance.

The business logic for a request in both applications involves selecting a row from the "accounts" table, doing some processing based on the balance, and then updating the balance in the database. The processing that happens between selecting and updating the balance cannot participate in a transaction.

I realized that, between selecting the row and updating it, the row could be selected and updated by another request from the same or a different application.

I found this issue described in the article below; I am referring to the second scenario, the "lost update". http://www.codeproject.com/Articles/342248/Locks-and-Duration-of-Transactions-in-MS-SQL-Serve

The second scenario is when one transaction (Transaction A) reads a record and retrieves the value into a local variable, and the same record is then updated by another transaction (Transaction B). Later, Transaction A updates the record using the value in the local variable. In this scenario, the update done by Transaction B can be considered a "lost update".
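To make the interleaving concrete, here is a minimal sketch (the accounts table and its AccountId/Balance columns are placeholders for my actual schema) of how the update can be lost:

    -- Session A (e.g. the WCF service) reads the balance into a variable:
    DECLARE @balance DECIMAL(18, 2);
    SELECT @balance = Balance FROM accounts WHERE AccountId = 42;  -- reads 100

    -- Session B (e.g. the web application) runs in between:
    --   UPDATE accounts SET Balance = 70 WHERE AccountId = 42;

    -- Session A writes back a value computed from its stale read,
    -- silently overwriting Session B's update:
    UPDATE accounts SET Balance = @balance - 10 WHERE AccountId = 42;  -- writes 90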

I am looking for a way to prevent the above situation, and to prevent the balance from becoming negative if multiple concurrent requests are received for the same row. A row should be selected and updated by only a single request (from either application) at a time to ensure the balance stays consistent.

I am thinking along the lines of blocking access to a row as soon as it has been selected by one request. Based on my research, below are my observations.

  1. Isolation levels

With the REPEATABLE READ isolation level it is possible for 2 transactions to select a common row.

I tested this by opening 2 SSMS windows. In both windows I started a transaction with the REPEATABLE READ isolation level, followed by a select on a common row. I was able to select the row in each transaction.

Next I tried to update the same row from each transaction. The statements kept running for a few seconds. Then the update from the first transaction succeeded, while the update from the second transaction failed with the message below.

Error 1205 : Transaction (Process ID) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

So if I am using a transaction with REPEATABLE READ, it should not be possible for 2 concurrent transactions to update the same row; SQL Server automatically chooses to roll back one of the transactions. Is this correct?
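For reference, the script I ran in each SSMS window was roughly the following (the table and key values are placeholders):

    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    BEGIN TRANSACTION;

    -- Step 1: both windows can run this; REPEATABLE READ holds the shared
    -- lock until the end of the transaction.
    SELECT Balance FROM accounts WHERE AccountId = 42;

    -- Step 2: run in both windows. Each update needs an exclusive lock but is
    -- blocked by the other session's shared lock, so SQL Server detects a
    -- deadlock and rolls one transaction back with error 1205.
    UPDATE accounts SET Balance = Balance - 10 WHERE AccountId = 42;

    COMMIT TRANSACTION;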

But I would also like to avoid the deadlock error by allowing a particular row to be selected by a single transaction only.

  2. ROWLOCK

I found the below answer on Stack Overflow, which mentions using the ROWLOCK hint to prevent the deadlock (see the comments on the accepted answer).

Minimum transaction isolation level to avoid "Lost Updates"

I started a transaction and used a select statement with ROWLOCK and UPDLOCK. Then, in a new SSMS window, I started another transaction and tried to run the same select query (with the same lock hints). This time I was not able to select the row. The statement kept running in the new SSMS window.

So the use of ROWLOCK and UPDLOCK within a transaction seems to block the row for other select statements that use the same lock hints.
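The select I used was along these lines (names are again placeholders):

    BEGIN TRANSACTION;

    -- UPDLOCK takes an update lock instead of a shared lock and holds it until
    -- the end of the transaction; ROWLOCK asks for row-level granularity.
    -- A second session running this same statement waits here until the first
    -- transaction commits or rolls back. Note that a plain SELECT without
    -- hints can still read the row, since shared locks are compatible with
    -- update locks.
    SELECT Balance
    FROM accounts WITH (ROWLOCK, UPDLOCK)
    WHERE AccountId = 42;

    COMMIT TRANSACTION;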

I would appreciate it if someone could answer the below questions.

  1. Are my observations regarding isolation levels and ROWLOCK correct?

  2. For the scenario that I described, should I use the ROWLOCK and UPDLOCK hints to block access to a row? If not, what is the correct approach?

I am planning to place my select and update code in a transaction. The first select query in the transaction will use the ROWLOCK and UPDLOCK hints. This will prevent the record from being selected by another transaction that uses a select with the same lock hints to retrieve the same row, as in the sketch below.
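Put together, the pattern I have in mind looks like this sketch (the @amount to deduct and the table/column names are placeholders; the balance check inside the transaction is what should keep the balance from going negative):

    DECLARE @amount DECIMAL(18, 2);
    DECLARE @balance DECIMAL(18, 2);
    SET @amount = 10;

    BEGIN TRANSACTION;

    -- The update lock is held until COMMIT/ROLLBACK, so no other request
    -- using this same pattern can read the balance while we hold it.
    SELECT @balance = Balance
    FROM accounts WITH (ROWLOCK, UPDLOCK)
    WHERE AccountId = 42;

    IF @balance >= @amount
    BEGIN
        -- ... business processing based on @balance happens here ...
        UPDATE accounts SET Balance = @balance - @amount WHERE AccountId = 42;
        COMMIT TRANSACTION;
    END
    ELSE
        ROLLBACK TRANSACTION;  -- insufficient balance: it never goes negative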


1 Answer


I would suggest the SNAPSHOT isolation level. It is very similar to Oracle's lock management.

See http://www.databasejournal.com/features/mssql/snapshot-isolation-level-in-sql-server-what-why-and-how-part-1.html

If your code is not too complicated, you can probably implement this without any changes. Bear in mind that some visibility may be affected (i.e. dirty reads may not return dirty data).

I find this blanket system easier and more precise than using query hints all over the place.

Configure the database using (substitute your own database name):

    ALTER DATABASE YourDatabase SET ALLOW_SNAPSHOT_ISOLATION ON

Then use this to prefix your transaction statements:

    SET TRANSACTION ISOLATION LEVEL SNAPSHOT
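In context, a transaction under snapshot isolation would look something like this sketch (table and values are placeholders):

    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    BEGIN TRANSACTION;

    -- Reads see a consistent snapshot of the data as of the first read in
    -- the transaction, without taking shared locks.
    SELECT Balance FROM accounts WHERE AccountId = 42;

    -- If another transaction modified this row after our snapshot began,
    -- this statement raises error 3960 (snapshot update conflict) rather
    -- than silently losing the other update.
    UPDATE accounts SET Balance = Balance - 10 WHERE AccountId = 42;

    COMMIT TRANSACTION;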
Grantly
  • I have gone through the link on Snapshot isolation that you provided. I also referred to this [MSDN link](http://msdn.microsoft.com/en-us/library/tcbchxcb.aspx). It states Snapshot isolation uses an optimistic concurrency model: if a transaction tries to commit changes to data that has been modified since the transaction began, it will be rolled back and an error will be raised. It suggests using UPDLOCK hints for SELECT statements. In my project both applications may try to select and update the same row. As per my understanding, with snapshot isolation I may run into errors in such a scenario. – M Singh Dec 11 '14 at 16:09
  • Then you simply manage the error, and resend the transaction. I'm sure you will encounter this problem far less than your current issues – Grantly Dec 11 '14 at 17:47
  • I see it is possible to handle the error and resend the transaction. But I was hoping for a solution that avoids the extra error handling code. I would like to highlight another point. My business logic involves selecting a row from the "accounts" table, performing some business process based on the balance, then deducting the balance and updating the same row. If more than 1 request is received for the same row, the balance could go negative. How can this be prevented? Please note the business process after the initial balance check cannot be rolled back. – M Singh Dec 19 '14 at 20:57
  • You will be surprised how well the Snapshot Isolation works...Almost as good as Oracle. You probably can't push SQL Server much further, unless you want to manage the errors / rollbacks all the time. Your second point probably represents a separate question...But if the process is wholly contained in a transaction - that is the best solution (Then the second request will either wait or fail) – Grantly Dec 19 '14 at 21:32
  • I would like to point out that the purpose of my question is to find a way to ensure that the balance can be updated by only one request (from either application) at a time, and to ensure the balance does not become negative. I have updated my original question to make it clearer. A request starts with the selection of the balance, based on which a business process is performed. As mentioned in my previous comment, it is not possible to roll back the intermediate business process in case of a failure. I still do not understand how snapshot isolation can help in my situation. Could you please review my question again. – M Singh Dec 22 '14 at 18:30
  • @MSingh SQL actually does put locks on the rows (behind the scenes...these are not the formal locks you would use in a HINT), and also uses 'FIFO locks' type system to ensure the rows are free to update. I'll see if I can find an article for you and will post the link if I can find the relevant papers.... Have you experimented with the SNAPSHOT isolation yet? – Grantly Dec 23 '14 at 17:36
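For completeness, a minimal sketch of the "manage the error and resend" approach discussed in the comments above, assuming snapshot isolation is enabled (3960 is the snapshot update-conflict error number; the retry count and names are illustrative):

    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

    DECLARE @retries INT;
    SET @retries = 3;

    WHILE @retries > 0
    BEGIN
        BEGIN TRY
            BEGIN TRANSACTION;
            UPDATE accounts SET Balance = Balance - 10 WHERE AccountId = 42;
            COMMIT TRANSACTION;
            BREAK;  -- success: stop retrying
        END TRY
        BEGIN CATCH
            IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;

            IF ERROR_NUMBER() <> 3960
            BEGIN
                -- Not an update conflict: re-raise for the caller and stop.
                DECLARE @msg NVARCHAR(2048);
                SET @msg = ERROR_MESSAGE();
                RAISERROR(N'%s', 16, 1, @msg);
                BREAK;
            END

            SET @retries = @retries - 1;  -- update conflict: loop and resend
        END CATCH
    END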