
I have a table called Products with many columns. It is a temporary table used for reporting purposes. Data is written to this table concurrently by multiple user requests, and there are separate stored procedures that perform DML operations against it.

Table Structure:

CREATE TABLE Products (
  instance uniqueidentifier,
  inserted datetime,
  col1, 
  col2,
  ...
)

The inserted column is populated with GETDATE() to record when each row was inserted, and the instance column holds a value from NEWID(). One user request has one unique id but may produce a million rows. Below are the queries that are executed concurrently and are causing the deadlock. Please advise me.

Query 1:

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED 
DELETE P 
FROM Products P WITH (NOLOCK) 
WHERE instance = 'XXXX-xxx-xxx-xx'

Query 2:

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED 
DELETE P 
FROM Products P WITH (NOLOCK) 
WHERE inserted <= DATEADD(hh, -10, GETDATE())

Note: The nonclustered index is created on instance column.

Please advise which lock I can use in this scenario.

Note: I couldn't add a primary key because it takes too long when I insert 10 million rows into the table (that is for one transaction; there are 20 concurrent transactions), and the report should be generated sooner. My procedure has around 35 DML statements, of which around 15 are DELETE statements filtering on the instance column together with other columns (DELETE FROM table WHERE instance = @instance AND col1 = @col1).

krismath
VASU
  • Take a look at this http://stackoverflow.com/questions/9952137/multi-threading-c-sharp-application-with-sql-server-database-calls/10035988#10035988 – Phil Feb 25 '13 at 18:41

2 Answers


(1) You should stop using read uncommitted isolation. Use at least read committed.

(2) There are a number of things you could try to avoid deadlocks, like ensuring your different transactions access database objects in the same order, etc. This would be worth a read - http://support.microsoft.com/kb/169960
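As a sketch of the consistent-ordering idea: if every stored procedure that touches these tables acquires its locks in the same table order, one common deadlock pattern disappears. The procedure and second table name here are hypothetical.

```sql
-- Hypothetical sketch: every procedure deletes from Products first,
-- then from ProductsLog, so no transaction can hold a lock on
-- ProductsLog while waiting for Products (and vice versa).
CREATE PROCEDURE dbo.usp_CleanupInstance
    @instance uniqueidentifier
AS
BEGIN
    SET XACT_ABORT ON;
    BEGIN TRANSACTION;
        DELETE FROM Products    WHERE instance = @instance;  -- table 1 first
        DELETE FROM ProductsLog WHERE instance = @instance;  -- table 2 second, always
    COMMIT TRANSACTION;
END
```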

(3) Disable lock escalation for your table (more granular locks so better concurrency, but more lock overhead):

ALTER TABLE Products SET (LOCK_ESCALATION = DISABLE);

(4) Disallow Page Locks, and allow Row Locks on your indexes (will mean you can't defrag indexes, but you can still rebuild them):

ALTER INDEX [<YourIndex>] ON Products SET (ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = OFF);
Pylyp Lebediev
muhmud

First, there's no lock you can take on these delete statements besides an exclusive lock. Your isolation level and NOLOCK hints are being ignored by SQL Server:

(Nolock) Only applies to the SELECT statement.

Two suggestions:

Change your non-clustered index on instance to a clustered index. BUT, only do this if you can change NEWID() to NEWSEQUENTIALID().
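A sketch of that change, with an assumed index name; only worthwhile if the per-request id is generated sequentially:

```sql
-- Drop the existing nonclustered index (name assumed) and cluster on instance.
-- Only do this if the GUIDs arrive in increasing order, e.g. produced by a
-- NEWSEQUENTIALID() column default; random NEWID() values would cause heavy
-- page splits and fragmentation in a clustered index.
DROP INDEX IX_Products_instance ON Products;
CREATE CLUSTERED INDEX CIX_Products_instance ON Products (instance);
```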

Second, instead of performing a delete to remove records older than 10 hours... consider implementing rolling partitions. This will remove any contention caused by the cleanup with your other delete operations.
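A minimal sketch of rolling partitions on the inserted column, with illustrative names; the staging table must match the Products schema and indexes exactly:

```sql
-- Partition Products by hour on [inserted]; boundary values are examples.
CREATE PARTITION FUNCTION pf_ProductsByHour (datetime)
    AS RANGE RIGHT FOR VALUES ('2013-02-25T00:00', '2013-02-25T01:00');
CREATE PARTITION SCHEME ps_ProductsByHour
    AS PARTITION pf_ProductsByHour ALL TO ([PRIMARY]);

-- Aging out old rows then becomes a metadata-only operation instead of a
-- multi-million-row DELETE that competes for locks with the other sessions:
ALTER TABLE Products SWITCH PARTITION 1 TO Products_Stage;
TRUNCATE TABLE Products_Stage;
```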

Michael Fredrickson