
I have this piece of code that I am using to update a record. I want to know how to use SemaphoreSlim when there are multiple await statements in one block of code.

Can deadlocks and recursive locks occur for the piece of code below, and if they can, how do I avoid deadlocks and recursive locks when using SemaphoreSlim?

SemaphoreSlim writelock = new SemaphoreSlim(1, 1);

public async Task AddValueToDB(Valuedto value)
{
    try
    {
        await writelock.WaitAsync();
        _db.tblAction.Add(value);
        await _db.SaveChangesAsync();
    }
    catch (Exception ex) { throw ex; }
    finally { writelock.Release(); }
}

Please let me know if you want any more information about my requirement.
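
For reference, here is a minimal sketch of the usual pattern when more than one awaited call sits inside the protected block. The `_gate` field and the `AddValueToDbAsync` name are assumptions; `_db`, `tblAction`, and `Valuedto` are taken from the code above. The semaphore is acquired once before the `try`, every await stays inside the `try`, and `Release` runs exactly once in the `finally`. Because `SemaphoreSlim` has no thread affinity, awaiting while it is held does not by itself deadlock; trouble only starts if the method (directly or indirectly) awaits the same semaphore again before releasing it, since `SemaphoreSlim` is not reentrant.

    private static readonly SemaphoreSlim _gate = new SemaphoreSlim(1, 1);

    public async Task AddValueToDbAsync(Valuedto value)
    {
        // Acquire outside the try: if WaitAsync itself throws, Release must not run.
        await _gate.WaitAsync();
        try
        {
            // Any number of awaited calls can sit here; the semaphore stays held
            // across all of them until the finally block runs.
            _db.tblAction.Add(value);
            await _db.SaveChangesAsync();
        }
        finally
        {
            _gate.Release();
        }
    }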

  • Well you **shouldn't** be sharing Entity Framework contexts **between threads**, so this question is perhaps moot. https://stackoverflow.com/questions/52027908/how-to-work-with-entity-framework-in-different-threads – Mihail Stancescu Nov 23 '18 at 04:46
  • ...by the way, you can delete the `catch(Exception ex){throw ex;}` line. It's perfectly fine just to do a `try`-`finally`. There's no `catch` to it (pun intended) – Mihail Stancescu Nov 23 '18 at 04:51
  • And `throw ex` is not a rethrow. It's a new throw of a new exception. https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/throw – Paulo Morgado Nov 23 '18 at 07:34
  • When? Never. It's never needed but can *easily* cause trouble by holding database locks for longer, increasing blocking and possibly causing deadlocks. Why do you think you need any kind of locking in the first place? What are you trying to do? – Panagiotis Kanavos Nov 23 '18 at 07:44
  • If you want to insert records fast, use `SqlBulkCopy` to insert data using the bulk copy mechanism of SQL Server with minimal locking and logging. Or just make all `Add` calls and call `SaveChangesAsync()` **only once** at the end (see the batching sketch after these comments). EF will generate a batch of statements and execute them all at once. – Panagiotis Kanavos Nov 23 '18 at 07:46
  • If you encounter slow DB performance, *fix* it, don't try to cover it up by trying to execute statements in parallel. It won't improve anything - after all, all threads talk over the *same* network, to the *same* server that uses the *same* disks for IO. It can easily *harm* performance though as multiple connections try to access those same resources, taking locks that block each other. In case of INSERTs those are Exclusive (X) locks. – Panagiotis Kanavos Nov 23 '18 at 07:49
  • @Panagiotis Kanavos Actually I want to ensure that multiple concurrent requests do not perform a wrong update of the resource. I don't want to do a bulk copy; I am updating only one record in the table. If I don't place any lock, will it be an issue in this case? For async/await statements I found out that we need to use SemaphoreSlim, which awaits in an async way so no deadlock occurs. – Nilanjan Sen Nov 23 '18 at 09:50
  • @NilanjanSen you do that by writing a proper SQL statement, not locks on a *single* client. What happens when *two* clients try to update? Databases typically handle thousands of concurrent clients/users. BTW why would you have deadlocks with `INSERT` statements? Unless you used a long-running transaction instead of using optimistic locking? Did you try to "lock" records by reading them inside a transaction and then INSERT inside that transaction? – Panagiotis Kanavos Nov 23 '18 at 09:52
  • @NilanjanSen concurrency in databases is handled in one of two ways - pessimistic concurrency uses transactions and locks. This doesn't scale so well. *Optimistic* concurrency uses `rowversion` or timestamp fields to detect whether a record was modified since it was last read (see the optimistic-concurrency sketch after these comments). It scales orders of magnitude better than transactions. INSERTs don't need either. `SaveChangesAsync` uses a transaction internally to ensure all changes are stored in the database atomically – Panagiotis Kanavos Nov 23 '18 at 09:57
  • @NilanjanSen in short, if you have problems it's because of code you haven't posted here. Trying to lock a single client or even a single request won't fix them. – Panagiotis Kanavos Nov 23 '18 at 09:59
  • @Panagiotis Kanavos Thank you for the explanation, I really appreciate it. Actually, in the code above I am updating a record through a stored procedure. By using SemaphoreSlim(1,1) I am ensuring only one thread is allowed to do the transaction, and the wait is also asynchronous, so the chance of a deadlock is minimal. I am trying to find a solution for this async/await scenario. If we use row versioning, will it not add additional overhead? – Nilanjan Sen Nov 23 '18 at 10:08
  • @NilanjanSen databases have worked with thousands of concurrent clients for over thirty years. They don't need client-side locks. You don't need a transaction in the first place; `SaveChangesAsync` uses a transaction internally. And your worry is really about concurrent access to the *context*, not the *transaction*, which you wouldn't have if you didn't use a global context in the first place. – Panagiotis Kanavos Nov 23 '18 at 10:40
  • @NilanjanSen *all* EF tutorials show that the DbContext is either created only when needed in a `using` block, or that it's injected into a *container* and disposed as soon as a request is finished. That's because you *don't* need a global context, in fact it's a serious bug. Creating a new one doesn't cost anything. Opening a new connection doesn't cost anything either - connection pooling means that disposed connections are reset and placed on a client-side pool, ready to be reused. – Panagiotis Kanavos Nov 23 '18 at 10:43
  • @NilanjanSen your code should be `public async Task AddCustomer(Customer customer){ using(var ctx=new MyContext()){ ctx.Customers.Add(customer);await ctx.SaveChangesAsync();}}`. That's thread-safe and atomic. No global resource to protect – Panagiotis Kanavos Nov 23 '18 at 10:46
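
Batching sketch: a minimal illustration of the approach described in the comments above, assuming a hypothetical `InsertValuesAsync` method and `values` collection (`_db`, `tblAction`, and `Valuedto` come from the question). All entities are queued in the change tracker first and `SaveChangesAsync` is called only once, so EF persists every pending insert in a single call wrapped in its own internal transaction.

    public async Task InsertValuesAsync(IEnumerable<Valuedto> values)
    {
        // No database work happens here; entities are only queued in the change tracker.
        foreach (var value in values)
        {
            _db.tblAction.Add(value);
        }

        // One SaveChangesAsync call persists all pending inserts atomically,
        // inside the transaction EF opens internally.
        await _db.SaveChangesAsync();
    }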
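
Optimistic-concurrency sketch: an illustration of the `rowversion` approach mentioned in the comments, assuming EF Core and a hypothetical `ActionRecord` entity, `ActionRecords` DbSet, and `UpdatePayloadAsync` method. The `[Timestamp]` attribute (from `System.ComponentModel.DataAnnotations`) maps the property to a SQL Server `rowversion` column, and a conflicting concurrent update surfaces as a `DbUpdateConcurrencyException` instead of requiring any client-side lock.

    // Hypothetical entity; RowVersion is the concurrency token.
    public class ActionRecord
    {
        public int Id { get; set; }
        public string Payload { get; set; }

        [Timestamp]               // maps to a rowversion column on SQL Server
        public byte[] RowVersion { get; set; }
    }

    public async Task UpdatePayloadAsync(int id, string newPayload)
    {
        var record = await _db.ActionRecords.FindAsync(id);
        if (record == null) return;

        record.Payload = newPayload;
        try
        {
            // EF adds the original RowVersion to the UPDATE's WHERE clause;
            // zero affected rows means another caller changed the row first.
            await _db.SaveChangesAsync();
        }
        catch (DbUpdateConcurrencyException)
        {
            // Conflict detected: reload the current values and retry, or report it.
        }
    }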

0 Answers