
We've been using a mechanism (a class called SqlDeadlockHelper) for quite a while now, and it has helped us quite a bit when retrying database calls that failed due to a deadlock. SqlDeadlockHelper catches a SqlException, recognizes that it's a deadlock, and tries again. The second attempt almost always succeeds.
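For context, a sketch of what a helper like this might look like — the class name, retry policy, and delegate shape here are assumptions, not our actual implementation; the 1205 error number is SQL Server's documented deadlock-victim error:

```csharp
using System;
using System.Data.SqlClient;

public static class DeadlockRetry
{
    public static T Execute<T>(Func<T> databaseCall, int maxRetries = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return databaseCall();
            }
            catch (SqlException ex) when (ex.Number == 1205 && attempt <= maxRetries)
            {
                // 1205 = chosen as deadlock victim. SQL Server has already
                // rolled back the victim's transaction, so reissuing the
                // call is safe.
            }
        }
    }
}
```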

Is it safe to do something like this for command and/or connection timeouts? I mean, it's not possible for work to complete on SQL Server, only to timeout before the data gets back to the caller, is it?

Edit:

Transactions have been mentioned as a way to treat calls as a unit of work, so that the whole unit either succeeds or rolls back. But what about a single ADO.NET call that does only one thing? Is it necessary to wrap that in a transaction?

Bob Horn

2 Answers


Depending on your unit of work, it is possible for SQL Server to complete part of the work before it deadlocks and throws an error. The way you handle units of work is with transactions, which most SQL databases support. Wrap each unit of work in a transaction with Begin, Commit, and Rollback.

http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqltransaction.aspx
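A minimal sketch of the Begin/Commit/Rollback pattern with SqlTransaction — the connection string, table, and column names here are placeholders, not anything from the question:

```csharp
using System.Data.SqlClient;

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (SqlTransaction transaction = connection.BeginTransaction())
    {
        try
        {
            var command = connection.CreateCommand();
            command.Transaction = transaction;
            command.CommandText =
                "UPDATE Accounts SET Balance = Balance - @amt WHERE Id = @id";
            command.Parameters.AddWithValue("@amt", 100m);
            command.Parameters.AddWithValue("@id", 1);
            command.ExecuteNonQuery();

            transaction.Commit();   // everything in the unit of work is applied
        }
        catch
        {
            transaction.Rollback(); // nothing in the unit of work is applied
            throw;                  // let the caller decide whether to retry
        }
    }
}
```

Because the transaction either commits completely or rolls back completely, a failed call leaves the database as it was, which is what makes a retry safe.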

  • So this should apply to timeouts as well? The key is to use a transaction. If the transaction fails, it's safe to try again. What if it's not in a transaction, and it's just a straight ADO.NET call? Can that be retried? – Bob Horn Aug 06 '13 at 12:51
  • That's correct, it applies to all SQL commands. If the commands are not in a transaction then you are not guaranteed a "single" unit of work, which could result in partial data submission. Transactions are a SQL concept and were created to handle the scenario you are running into, as well as others. Here's an example of using an ADO transaction. http://msdn.microsoft.com/en-us/library/aa227162(v=VS.60).aspx – John C. Lieurance Aug 06 '13 at 12:58

From TechNet Library

After a deadlock is detected, the Database Engine ends a deadlock by choosing one of the threads as a deadlock victim. The Database Engine terminates the current batch being executed for the thread, rolls back the transaction of the deadlock victim, and returns a 1205 error to the application. Rolling back the transaction for the deadlock victim releases all locks held by the transaction. This allows the transactions of the other threads to become unblocked and continue. The 1205 deadlock victim error records information about the threads and resources involved in a deadlock in the error log.

So, your transaction is safely rolled back and you may try it again.

EDIT

Being a lazy a**e, I haven't read your question thoroughly. Let's see:

Is it safe to do something like this for command and/or connection timeouts?

I wouldn't make the same kind of generalization, because you could end up retrying reads against a database that isn't accessible at all. You should have a maximum number of retries.

I mean, it's not possible for work to complete on SQL Server, only to timeout before the data gets back to the caller, is it?

Never heard of that, and it shouldn't be possible if you use transactions. And you should be using them, along with XACT_ABORT, the flag that specifies whether SQL Server automatically rolls back the current transaction when a Transact-SQL statement raises a run-time error. However, it is possible for work to neither complete nor not-complete - see zombie transaction.
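A sketch of turning the flag on from ADO.NET — the connection string is a placeholder, and issuing the SET per-connection like this is one option among several (it can also be set inside a stored procedure):

```csharp
using System.Data.SqlClient;

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    var command = connection.CreateCommand();
    command.CommandText = "SET XACT_ABORT ON;";
    command.ExecuteNonQuery();
    // Subsequent commands on this connection now abort the batch and
    // roll back the current transaction when a run-time error occurs,
    // rather than leaving a partially applied transaction open.
}
```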

I just found this as well.

OzrenTkalcecKrznaric
  • That's a deadlock. I'm asking about a timeout. – Bob Horn Aug 06 '13 at 12:49
  • Thanks. See my edit. I'm asking because we have an existing app with a lot of single ADO.NET calls all over the place. I need to know if it's necessary to wrap all of those in a transaction. – Bob Horn Aug 06 '13 at 13:24
  • No, if your code is a single TSQL statement, you don't have to wrap it in a transaction; it is auto-committed by default. – OzrenTkalcecKrznaric Aug 06 '13 at 14:14