
I have a C# program working with a SQL Server database, running on different client PCs. Now I am getting this exception, which (if not caught) closes my application:

Transaction (Process ID ...) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

and the stack trace shows that the exception happens here:

at Gui.DB.sqlServerWrapper.MarkAsNonFreshSample(String barcode, Int32 devID)

Where the mentioned method is this:

        public void MarkAsNonFreshSample(string barcode, int devID)
        {
            mux.WaitOne();
            var sql = "DELETE FROM results WHERE barcode=@barcode AND devID=@devID";
            var command = new SqlCommand(sql, conn);
            command.Parameters.AddWithValue("barcode", barcode);
            command.Parameters.AddWithValue("devID", devID);
            command.ExecuteNonQuery();
            mux.ReleaseMutex();
        }

I am using the "mux" mutex to get exclusive access to the SQL connection "conn", because my program has multiple threads.

I am not using a complex query or a long transaction (and I guess this short query runs as a single transaction, which should not cause a deadlock; am I wrong?).

Where is the problem, and how should I fix it?

edited by SqlZim · asked by Ali Khalili
  • *Remove* the mutex and the global connection. Global connections guarantee deadlocks and make it practically impossible to recover in case of error. Connections should remain open for as little time as possible – Panagiotis Kanavos Jan 02 '17 at 17:02
  • Can you retrieve [the deadlock graph](http://dba.stackexchange.com/questions/10644/deadlock-error-isnt-returning-the-deadlock-sql/10646#10646) and include it in your question? – Martin Smith Jan 02 '17 at 17:38
  • While sharing the connection itself is not necessarily the actual cause of the error, it is possible that earlier uses of the connection started a transaction and performed activities without committing or rolling back. If multiple clients do this and then try to delete rows later, the different clients may deadlock with each other as a result. Please show us **all code** which uses this shared connection—this code alone is unlikely to cause such a deadlock error by itself. – binki Aug 06 '19 at 19:16

1 Answer


Sharing a connection like that is not good practice. Open a short-lived connection per operation instead:

public void MarkAsNonFreshSample(string barcode, int devID)
{
    using (SqlConnection con = new SqlConnection(conString))
    {
        con.Open();
        using (SqlCommand command = con.CreateCommand())
        {
            command.CommandText = "DELETE FROM results WHERE barcode=@barcode AND devID=@devID";
            command.Parameters.AddWithValue("barcode", barcode);
            command.Parameters.AddWithValue("devID", devID);
            command.ExecuteNonQuery();
        }
    } 
}
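
Since the error message itself says "Rerun the transaction", it is also common to pair short-lived connections with a small retry loop for deadlock victims. A minimal sketch, assuming the same `conString` field as above and `using System.Threading;` / `using System.Data.SqlClient;` at the top of the file (SQL Server reports a deadlock victim as error number 1205):

```csharp
// Sketch: rerun the delete a few times if this process is chosen as a
// deadlock victim (SqlException.Number == 1205), as the error message advises.
public void MarkAsNonFreshSample(string barcode, int devID)
{
    const int maxRetries = 3;
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            using (SqlConnection con = new SqlConnection(conString))
            {
                con.Open();
                using (SqlCommand command = con.CreateCommand())
                {
                    command.CommandText = "DELETE FROM results WHERE barcode=@barcode AND devID=@devID";
                    command.Parameters.AddWithValue("barcode", barcode);
                    command.Parameters.AddWithValue("devID", devID);
                    command.ExecuteNonQuery();
                }
            }
            return; // success
        }
        catch (SqlException ex) when (ex.Number == 1205 && attempt < maxRetries)
        {
            // Deadlock victim: back off briefly, then rerun the statement.
            Thread.Sleep(100 * attempt);
        }
    }
}
```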
paparazzo
  • you are definitely right. But (1) is this the problem? (2) I am using this approach because if my threads interact with the database some hundred times a second and I open and close the connection each time, it would probably cost me more – Ali Khalili Jan 02 '17 at 16:53
  • 1
    @Akhir using a global long running connection *and* a mutex is going to cause big problems no matter what. Just the long running connection is enough to accumulate locks and cause deadlocks. Anything you read with that connection acquires a Select lock, which means that the first time someone else tries to do an update you'll get a deadlock. – Panagiotis Kanavos Jan 02 '17 at 17:03
  • 2
    @Akhir Second, what you are doing is what costs 1000 times more, because long connections cause blocking and deadlocks. ADO.NET uses connection pooling so ti *doesn't* need to create a new server connection each time. Closed connections are reset and placed back in the pool, ready for reuse. ADO before it did the same, since 1995 at least. The difference in performance and scalabitlity is *orders* of magnitude in favor of pooling, because 5-6 connections can end up service hundreds of requests. – Panagiotis Kanavos Jan 02 '17 at 17:05
  • 1
    Finally, calling blocking methods like `ExecuteNonQuery` in separate threads is fake multithreading. You just block the threads waiting for the database to respond. You should use the asynchronous methods like `ExecuteNonQueryAsync` which *don't* block any threads. The use the operating system's IO completion port features to wait for responses without using threads. You can server hundreds of requests with only a few ThreadPool threads – Panagiotis Kanavos Jan 02 '17 at 17:09
  • 2
    Use `await conn.OpenAsync()`, too. It also makes a significant difference - see: https://stackoverflow.com/questions/41399766/why-is-the-throughput-of-this-c-sharp-data-processing-app-so-much-lower-than-the/41402158#41402158 – Clay Jan 02 '17 at 20:45
  • 2
    This is not an answer to the question; as a persistent connection is irrelevant to the problem. – Ian Boyd Oct 24 '18 at 19:51
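
Following the comments above about `OpenAsync` and `ExecuteNonQueryAsync`, an asynchronous variant of the answer's method might look like the sketch below (again assuming a `conString` field, plus `using System.Threading.Tasks;` and `using System.Data.SqlClient;`). It does not tie up a thread while waiting on the database:

```csharp
// Sketch of the async variant suggested in the comments: the awaits free the
// calling thread while the open and the delete are in flight.
public async Task MarkAsNonFreshSampleAsync(string barcode, int devID)
{
    using (SqlConnection con = new SqlConnection(conString))
    {
        await con.OpenAsync();
        using (SqlCommand command = con.CreateCommand())
        {
            command.CommandText = "DELETE FROM results WHERE barcode=@barcode AND devID=@devID";
            command.Parameters.AddWithValue("barcode", barcode);
            command.Parameters.AddWithValue("devID", devID);
            await command.ExecuteNonQueryAsync();
        }
    }
}
```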