0

I'm trying to update a row in a table whenever someone views the page (it increments the viewed count), but now and then I get a deadlock error. I'm guessing this is due to two or more people trying to update the same row at the same time?

The error is:

Transaction (Process ID 60) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

And my SQL is:

UPDATE [ProductDescription] 
SET [ViewCount] = ([ViewCount] + 1) 
WHERE ProductCode = @prodCode 
    AND ApplicationID = @AppID

I believe I may need a `WITH (NOLOCK)`?

dhardy
  • You should start with [Analyzing Deadlocks with SQL Server Profiler](http://msdn.microsoft.com/en-us/library/ms188246(v=sql.100).aspx) – Mikael Eriksson Nov 02 '12 at 10:48
  • Don't use `WITH (NOLOCK)`; you should consider reviewing your transaction isolation level and indexes instead. Proper indexes can reduce contention (a sketch of such an index follows these comments). – ta.speot.is Nov 02 '12 at 10:49
  • Thanks, I have added an index, that should hopefully dramatically reduce the errors! – dhardy Nov 02 '12 at 11:38
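
As a reference for that suggestion, a minimal sketch of an index covering the two columns in the question's WHERE clause (the index name and the NONCLUSTERED choice are assumptions; the table and column names come from the question):

-- hypothetical name; the keys match the UPDATE's WHERE clause
CREATE NONCLUSTERED INDEX IX_ProductDescription_ProductCode_ApplicationID
    ON [ProductDescription] (ProductCode, ApplicationID);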

3 Answers

0

You DO NOT need NOLOCK. That hint only removes read locks and will cause unpredictable results. The better thing to do would be to use TABLOCK on the UPDATE statement, meaning that other processes cannot access the table until you have finished.
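
A minimal sketch of that hint applied to the statement from the question (the TABLOCK hint is this answer's suggestion, not something from the original question):

UPDATE [ProductDescription] WITH (TABLOCK)
SET [ViewCount] = ([ViewCount] + 1)
WHERE ProductCode = @prodCode
    AND ApplicationID = @AppID

TABLOCK escalates straight to a single table-level lock, trading concurrency for a simpler locking pattern; the lock is released when the statement completes.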

Pete Carter
  • I agree that `WITH (NOLOCK)` is bad but surely `ROWLOCK` does not make it such that *other processes cannot access the **table** until you have finished.* `TABLOCKX` would achieve this, no? – ta.speot.is Nov 02 '12 at 10:51
  • "WITH (NOLOCK) is bad". Really? Take a look at the some of the built in system SPs they are used everywhere. Consider also that this is equivalent to READ UNCOMMITTED. It is just a different level of granularity - nothing more. – Robbie Dee Nov 07 '12 at 18:37
  • DBA tomes, journals and white papers are littered with questionable wisdom like: never use dirty reads, indexes are always a good thing, full table scans are bad, always normalise. Nonsense, nonsense, nonsense. It is about understanding both the efficacy and the risk. Databases rarely exist in a vacuum - they are there more often than not to serve an application that HAS TO perform (especially on the internet) for the users. Unfortunately for those DBAs in their ivory towers this does sometimes involve using dirty reads, denormalised data and full table scans... – Robbie Dee Nov 07 '12 at 19:24
  • The corollary of your statement is: what is the point in having correct data if the application crashes or is unresponsive? Consider the vanilla case where an SP rolls back a transaction due to some anomaly. On a database (say Stack Overflow's) where the data is rapidly changing, the very rare case of a dirty read is unlikely to cause significant issues and less likely still to even be noticed. However, for financial institutions, correct data is a must and you design accordingly. Understand both the efficacy and the risk. – Robbie Dee Nov 07 '12 at 20:41
  • @RobbieDee But the application will also crash if NOLOCK causes a 601 error because the pages have moved. I really do think, however, that we should call this conversation to a halt, as it is not constructive and SO is not a debating forum. I am going to mark all our comments on here for moderator review; I have already deleted my own. – Pete Carter Nov 07 '12 at 20:59
  • That would rather depend on how well the code was written. From SP_WHO2: ...from master.dbo.sysprocesses with (nolock) if @@error <> 0 ... Since SQL Server 2005, it has been very easy to trap these sorts of exceptions and retry if required (a sketch of that pattern follows these comments). – Robbie Dee Nov 07 '12 at 22:20
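
A minimal sketch of the trap-and-retry pattern described in the last comment, applied to the UPDATE from the question. The retry count and delay are assumptions, and THROW requires SQL Server 2012+ (on 2005/2008, re-raise with RAISERROR instead):

DECLARE @retries INT = 3;

WHILE @retries > 0
BEGIN
    BEGIN TRY
        UPDATE [ProductDescription]
        SET [ViewCount] = ([ViewCount] + 1)
        WHERE ProductCode = @prodCode
            AND ApplicationID = @AppID;

        BREAK; -- success, stop retrying
    END TRY
    BEGIN CATCH
        -- 1205 = chosen as deadlock victim; anything else is re-raised
        IF ERROR_NUMBER() <> 1205
            THROW;

        SET @retries = @retries - 1;
        WAITFOR DELAY '00:00:00.100'; -- brief pause before the retry
    END CATCH
END
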
-1

Set the transaction isolation level to SERIALIZABLE or SNAPSHOT to update the data properly.
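
A sketch of the SNAPSHOT variant of this suggestion; note that snapshot isolation must first be enabled at the database level (the database name is a placeholder):

-- one-off, database-level setting
ALTER DATABASE [MyDatabase] SET ALLOW_SNAPSHOT_ISOLATION ON;

-- then, per session
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

BEGIN TRANSACTION;

UPDATE [ProductDescription]
SET [ViewCount] = ([ViewCount] + 1)
WHERE ProductCode = @prodCode
    AND ApplicationID = @AppID;

COMMIT TRANSACTION;

Note that under SNAPSHOT, a write-write conflict on the same row raises update-conflict error 3960 rather than blocking, so a retry may still be needed.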

AnandPhadke
  • Serializable is likely to cause further deadlocks, as the lock will be held until the end of the transaction – Pete Carter Nov 02 '12 at 10:49
  • See, each update will be in a transaction, and other updates will wait for the first one to finish. How will it lead to further deadlocks, can you explain? – AnandPhadke Nov 02 '12 at 11:02
  • If the transaction accesses another table afterwards, and another transaction accesses a different table first – Pete Carter Nov 02 '12 at 11:09
  • In that case the same will apply for your TABLOCK also – AnandPhadke Nov 02 '12 at 11:38
  • No. It would only apply to TABLOCK if you also specified HOLDLOCK. TABLOCK on its own will release after the statement, not at the end of the transaction like SERIALIZABLE would (see the sketch after these comments) – Pete Carter Nov 02 '12 at 18:23
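
For illustration, the distinction in the last comment is just the extra hint; both statements below are sketches built from the question's UPDATE, not code from either answer:

-- table lock released when the statement completes
UPDATE [ProductDescription] WITH (TABLOCK)
SET [ViewCount] = ([ViewCount] + 1)
WHERE ProductCode = @prodCode AND ApplicationID = @AppID;

-- HOLDLOCK keeps the table lock until the end of the enclosing transaction
UPDATE [ProductDescription] WITH (TABLOCK, HOLDLOCK)
SET [ViewCount] = ([ViewCount] + 1)
WHERE ProductCode = @prodCode AND ApplicationID = @AppID;
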
-1

The problem is more likely to be caused by users running SELECTs at the same time. The default isolation level is "read committed", which takes shared locks on reads.

Unless it is critical that the data you're reading is up to date, consider using:

WITH (NOLOCK)

in the selects or an alternative isolation level.
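
A sketch of both routes: a dirty read using the hint, and READ_COMMITTED_SNAPSHOT as one alternative isolation-level approach (the selected column and the database name are placeholders):

-- dirty read: does not block behind concurrent UPDATEs, but may see uncommitted data
SELECT [ViewCount]
FROM [ProductDescription] WITH (NOLOCK)
WHERE ProductCode = @prodCode
    AND ApplicationID = @AppID;

-- alternative: row versioning, so readers no longer block behind writers
ALTER DATABASE [MyDatabase] SET READ_COMMITTED_SNAPSHOT ON;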

Robbie Dee
  • Unless you are performing an action such as populating a drop-down list, NOLOCK is almost always a bad idea. Remember, it is not just the fact you can read dirty data; you can also get duplicate results, etc., because of pages shuffling – Pete Carter Nov 02 '12 at 10:50
  • There are very few absolutes in DB programming and WITH (NOLOCK) is one of them. You need to consider carefully about whether you care about stale data. Most of the time you don't. See here: http://stackoverflow.com/questions/1452996/is-the-nolock-sql-server-hint-bad-practice – Robbie Dee Nov 02 '12 at 11:16
  • I certainly don't advise throwing them round like confetti. If you have many tables that are constantly being updated and read, you need to look at isolation levels. – Robbie Dee Nov 02 '12 at 11:18