
So, I'm having a bit of an issue. The case is that we have a legacy application that exposes a web service and takes in data sent to it from a mobile app.

Recently we've begun to get reports from users that submitting results from the mobile app is taking a very long time.

When inspecting the web service's log, we're seeing a lot of exceptions pointing to different methods in the code. What these methods have in common is that they are all write operations to the database. All of them are part of a larger SQL transaction which is opened by the application.

The application does a lot of checks and lookups when processing an incoming result before it begins to write to the database. Everything is done through Entity Framework and is wrapped inside a transaction.

It seems that these transactions lock the database and block other incoming results from being handled, which causes the app's attempt to hand in a result to time out and retry continuously until it's finally let through. That's just our theory, though.

The transaction only writes new data to the database; it doesn't modify any existing data. The only reason for wrapping it in a transaction seems to be to be able to roll it back if it fails, not to keep other write operations away from the tables while it works.
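To make the pattern concrete, this is roughly the shape of one submission as SQL Server sees it (the table and column names here are made up for illustration; they are not our actual schema):

```sql
-- Illustrative only: placeholder tables and columns, not the real schema.
DECLARE @deviceId int = 42,
        @questionnaireId int = 7,
        @questionId int = 1,
        @value nvarchar(100) = N'Yes';

BEGIN TRANSACTION;

    -- checks and lookups performed before any write
    SELECT COUNT(*) FROM dbo.Submissions WHERE DeviceId = @deviceId;

    -- then inserts of new rows only; no existing data is modified
    INSERT INTO dbo.Submissions (DeviceId, QuestionnaireId, SubmittedAt)
    VALUES (@deviceId, @questionnaireId, SYSUTCDATETIME());

    INSERT INTO dbo.Answers (SubmissionId, QuestionId, Value)
    VALUES (SCOPE_IDENTITY(), @questionId, @value);

COMMIT TRANSACTION;
-- The exclusive locks taken by the INSERTs are held until COMMIT, so a slow
-- submission can block the next one that touches the same tables.
```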

The main issue is that it's a legacy application and we currently don't have access to anyone who worked on it back in the day, so a rewrite would be very time consuming and error prone. We're in the process of phasing out this particular application, so if we can just buy ourselves some time, that would be ideal.

If we are right in our assumption that there should be no issue in having two parallel write operations running, is there an easy way of allowing multiple of these transactions to run simultaneously without a major rewrite of the application?

  • Do not open a transaction and do everything inside that transaction. You should only do the actual insert/update/delete in transactions. You might be keeping the transaction open too long and thus have a much bigger chance of blocking reading transactions, which causes your problem. – GuidoG Jan 16 '17 at 13:52
  • Hi @GuidoG, we've realized that. The problem doesn't seem to be that we're blocking other read operations; it's that the transactions are blocking other write operations (other instances of themselves). However, as noted in the question, there shouldn't theoretically be an issue in having multiple of them running at the same time. Can it somehow be accomplished? – Christian A. Rasmussen Jan 16 '17 at 13:54
  • How are you sure you are not blocking read operations and that you are blocking write operations only? Other instances of itself, as you call it? How did you manage to know that for sure? Also, are there other applications doing reads/updates on this database? They could also be blocking. – GuidoG Jan 16 '17 at 14:56
  • @GuidoG, I'm quite sure we are blocking both read and write operations. However, it's primarily the write operations which are a problem, since they seem to be causing other incoming results from the mobile apps to time out due to deadlocks. What we've done so far is analyze the code, and we have concluded that, all in all, there shouldn't be a problem in having two of those write operations running in parallel. They are just writing new data to the database; they are completely independent. But, as you insinuate, we are not entirely sure. However, if it is possible, we would like to try it out in test. – Christian A. Rasmussen Jan 16 '17 at 15:23
  • I would start with [BrentOzarULTD's SQL Server First Responder Kit](https://github.com/BrentOzarULTD/SQL-Server-First-Responder-Kit/tree/master) to see what's off with your database. Continue with [Capturing Deadlock Information](https://www.brentozar.com/archive/2014/06/capturing-deadlock-information/) – SqlZim Jan 16 '17 at 16:20
  • you might want to read this: https://www.brentozar.com/archive/2013/01/implementing-snapshot-or-read-committed-snapshot-isolation-in-sql-server-a-guide/ – GuidoG Jan 16 '17 at 16:24
  • Thank you to you both, @GuidoG, I'll have a look at it. – Christian A. Rasmussen Jan 16 '17 at 16:29
  • Thank you, @SqlZim, I'll give it a look :) – Christian A. Rasmussen Jan 16 '17 at 16:29

2 Answers


The first thing I'd suggest is to try and get a handle on the locks that are being generated. Start by grabbing sys.dm_tran_locks etc.
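Something along these lines, run while the blocking is actually happening, will show you which requests are waiting on locks and which sessions are blocking them:

```sql
-- Requests currently waiting on locks, and the sessions blocking them.
SELECT tl.request_session_id,
       tl.resource_type,
       tl.resource_associated_entity_id,
       tl.request_mode,
       tl.request_status,          -- GRANT vs WAIT
       r.blocking_session_id
FROM   sys.dm_tran_locks AS tl
LEFT JOIN sys.dm_exec_requests AS r
       ON r.session_id = tl.request_session_id
WHERE  tl.request_status = 'WAIT'
   OR  r.blocking_session_id <> 0;
```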

Do you have a lot of FKs, indexes, etc. on your DB? You could consider disabling these if the app can be trusted.

Or it might just be easiest to do some archiving: can you remove historical data from the live tables? The smaller these are, the better.

Worst case, you could create multiple DBs (e.g. one per app pool) and manually merge the data into a master.

john McTighe
  • Thanks for your comment, John. To answer your questions: Yes, it has quite a lot of FKs. We don't trust the app enough to disable these. The data are used for reporting and sadly we have use for the entire dataset, so we can't really archive anything. That might be a solution, but if we get that far, I think a rewrite of the procedure which takes in the result and processes it for insertion would be easier. As mentioned, we're phasing out the application, so we just need it to run for a few more months. I'm hoping we can find a more practical solution to allow for parallel transactions. – Christian A. Rasmussen Jan 16 '17 at 15:37
  • Never ever ever disable FKs. It's the only thing in this world that is able to make sure your database does not get corrupted. – GuidoG Jan 16 '17 at 16:22

Your problem is how the client is using the DB. From what you're telling me, the client is locking the tables and preventing other things from accessing them.

Normally I would suggest using something like a WITH (NOLOCK) hint (depending on whether your app can handle a dirty read), but since you're using Entity Framework I will suggest looking at this instead: Get Entity Framework 6 use NOLOCK in its underneath SELECT statements
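In raw T-SQL the two options look roughly like this (placeholder query and table; the Entity Framework workaround in the link typically boils down to the same READ UNCOMMITTED behaviour around its SELECTs):

```sql
-- Option 1: per-statement hint, dirty reads possible on this query only.
SELECT s.Id, s.SubmittedAt
FROM   dbo.Submissions AS s WITH (NOLOCK)
WHERE  s.DeviceId = 42;

-- Option 2: session-level isolation, which is effectively what wrapping the
-- EF query in a ReadUncommitted transaction scope does.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT s.Id, s.SubmittedAt
FROM   dbo.Submissions AS s
WHERE  s.DeviceId = 42;

SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
```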

The isolation level on your DB will play a large role: Isolation Levels
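If dirty reads are not acceptable, read committed snapshot isolation (the option GuidoG linked in the comments on the question) is usually the lower-risk switch: readers stop blocking writers and vice versa, without any change to the application code. A sketch, with a placeholder database name:

```sql
-- Check the current settings (database name is a placeholder).
SELECT name, snapshot_isolation_state_desc, is_read_committed_snapshot_on
FROM   sys.databases
WHERE  name = N'LegacyAppDb';

-- Switch the default READ COMMITTED behaviour to row versioning.
-- WITH ROLLBACK IMMEDIATE kicks out open connections, so do it in a quiet window.
ALTER DATABASE LegacyAppDb
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;
```

Keep in mind this only removes reader/writer blocking; two transactions writing to the same rows or index keys will still block each other until one of them commits.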

I would suggest you check how the client is interacting with the DB. The operation that takes a long time: is it a select, insert, update, or upsert? Did you profile it and see what the problem is (execution plan, etc.)?

Is there an index on the table? Is the query using the index?
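For the profiling part, if you don't want to set up a full trace, a quick look at the active requests already tells you a lot about what is running, what it is waiting on, and who is blocking it:

```sql
-- Active requests: current statement, wait type, and blocking session.
SELECT r.session_id,
       r.status,
       r.command,
       r.wait_type,
       r.wait_time,
       r.blocking_session_id,
       t.text AS running_sql
FROM   sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE  r.session_id <> @@SPID;
```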

If the problem is an insert/update (upsert) issue, Entity Framework handles that awfully. I suggest you modify that code to something like this: Upsert fast in SQL Server
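The usual T-SQL shape of such an upsert, done in a short transaction of its own, is an UPDATE followed by an INSERT when nothing was updated (table and key names are placeholders):

```sql
DECLARE @deviceId int = 42,
        @questionnaireId int = 7;

BEGIN TRANSACTION;

-- The UPDLOCK/SERIALIZABLE hints stop two concurrent upserts for the same key
-- from both taking the INSERT branch.
UPDATE dbo.Submissions WITH (UPDLOCK, SERIALIZABLE)
SET    SubmittedAt = SYSUTCDATETIME()
WHERE  DeviceId = @deviceId
  AND  QuestionnaireId = @questionnaireId;

IF @@ROWCOUNT = 0
    INSERT INTO dbo.Submissions (DeviceId, QuestionnaireId, SubmittedAt)
    VALUES (@deviceId, @questionnaireId, SYSUTCDATETIME());

COMMIT TRANSACTION;
```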

Entity Framework is convenient but not great for performance when handling large datasets.

Kelly