
My application runs a nightly purge process to delete old records from the primary tables of my OLTP database. I was experiencing lock escalation during the purge, which was blocking concurrent inserts into the table, so I modified the purge procedure to loop through and delete records in blocks of 4900, which should be well below SQL Server's lock escalation threshold of 5000. While lock escalation is much reduced, SQL Server Profiler still reports occasional lock escalation on the following DELETE statement in the loop:

-- outer loop increments @BatchMinId and @BatchMaxId variables
  BEGIN TRAN

  -- limit is set at 4900
  DELETE TOP (@limit) h
  OUTPUT DELETED.ChildTable1Id,
         DELETED.ChildTable2Id,
         DELETED.ChildTable3Id,
         DELETED.ChildTable4Id
  INTO #ChildRecordsToDelete
  FROM MainTable h WITH (ROWLOCK)
  WHERE h.Id >= @BatchMinId AND h.Id <= @BatchMaxId AND h.Id < @MaxId AND 
        NOT EXISTS (SELECT 1 FROM OtherTable ot WHERE ot.Id = h.Id);

  -- delete from ChildTables 1-4 (no additional references to MainTable)
  COMMIT TRAN;
-- end loop

The "IntegerData2" column in SQL Server Profiler for the reported lock escalation events (which is supposed to be the escalated lock count) ranges from 10197 to 10222 which does not look close to any multiple of 4900 (my purge batch size) plus any multiple of 1250 (number of additional locks SQL Server may take before attempting escalation).

Given that I am explicitly limiting the DELETE statement to 4900 rows, how are more locks ever being taken, especially to the point that SQL Server is escalating to a table lock? I would like to understand this before I resort to disabling lock escalation altogether on this table.
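
For reference, a minimal sketch of what that last resort would look like, assuming SQL Server 2008 or later and the dbo schema for the MainTable shown above:

  -- Last resort: keep the engine from escalating row/page locks on this table
  -- to a table lock. DISABLE is not absolute; for example, a scan of a heap
  -- under the SERIALIZABLE isolation level can still take a table lock.
  ALTER TABLE dbo.MainTable SET (LOCK_ESCALATION = DISABLE);

  -- A gentler alternative: AUTO allows escalation to the partition level
  -- instead of the whole table when the table is partitioned.
  -- ALTER TABLE dbo.MainTable SET (LOCK_ESCALATION = AUTO);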

Dan Hermann
  • I would not disable lock escalation - it can lead to other issues. For a very clear discussion of disabling lock escalation, see http://www.sqlpassion.at/archive/2014/02/25/lock-escalations/ How about purging in batch sizes of 1000? In my experience you won't see major speed differences. – benjamin moskovits Mar 11 '15 at 14:58
  • I think despite your best efforts, SQL Server may decide to escalate locks when it chooses to do so. Is there a timeliness requirement for the deletion, though? Logically, the smaller the batch size, the less likely you are to get lock escalation. Could it be reduced still further? Also, could the query be tuned in any way? That may reduce the likelihood of lock escalation. – Dan Saunders Mar 11 '15 at 15:09
  • I'm not really an expert on this, but if you have non-clustered indexes, I would assume those also need to be locked and deleted, plus maybe some pages in the clustered index too. Could that explain this? – James Z Mar 11 '15 at 15:19
  • @benjaminmoskovits I will disable lock escalation only as a last resort. I can try reducing the batch size, but even if that were to work, I would like to understand why 100 below the lock escalation threshold was not good enough while 4000 below it is. – Dan Hermann Mar 11 '15 at 19:19
  • @DanSaunders So far as I understand, SQL Server may defer lock escalation beyond 5000 locks per object, but it may not arbitrarily escalate at fewer than 5000 locks. – Dan Hermann Mar 11 '15 at 19:20
  • @JamesZ Good point about non-clustered indexes. There are some on that table, but so far as I can tell from the profiler trace, the escalation is occurring on the table itself. – Dan Hermann Mar 11 '15 at 19:22
  • The basic rule (which is somewhat flexible) is that at a certain point, when there are more than 5000 or so locks on a single table in a single session, SQL Server decides to escalate to a table lock. All kinds of locks occur, not just an exclusive (X) lock per row. To see all the locks being taken, you can begin a transaction, run your delete, and then select from sys.dm_tran_locks (see the sketch after these comments). – benjamin moskovits Mar 12 '15 at 00:57
  • To see an excellent explanation of SQL Server locking I would suggest you read any version of Kalen Delaney's SQL Server Internals book. – benjamin moskovits Mar 12 '15 at 01:08
  • @Dan: The threshold for lock escalation is not 5000 rows; it's just *one of the* parameters the database engine uses while escalating locks. Lock escalation *for practical purposes* is managed by the database engine. Read this to see that lock escalation is not exactly at 5000 rows: http://social.technet.microsoft.com/wiki/contents/articles/19870.sql-server-understanding-lock-escalation.aspx – Shanky Mar 12 '15 at 07:47
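
Following up on the sys.dm_tran_locks suggestion in the comments, here is a minimal sketch for checking how many locks a single batch actually acquires. It reuses the table names from the question and rolls back so nothing is deleted; the exact counts and lock types depend on the plan and the indexes touched.

  -- Run one batch in a test window and inspect the locks before rolling back.
  -- (The batch range predicates from the loop are omitted for brevity.)
  BEGIN TRAN;

  DELETE TOP (4900) h
  FROM MainTable h WITH (ROWLOCK)
  WHERE NOT EXISTS (SELECT 1 FROM OtherTable ot WHERE ot.Id = h.Id);

  -- Locks currently held by this session, grouped by resource type and mode.
  -- KEY locks taken on nonclustered indexes show up here alongside the
  -- clustered-index key locks, so the total can exceed the row count.
  SELECT resource_type, request_mode, COUNT(*) AS lock_count
  FROM sys.dm_tran_locks
  WHERE request_session_id = @@SPID
  GROUP BY resource_type, request_mode
  ORDER BY lock_count DESC;

  ROLLBACK TRAN;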

1 Answer


I can't comment on your question since I don't have enough reputation on this site, so I'm posting this as an answer instead.

I had a similar issue with a cleanup task running at night. The delete statement was blocked by the "GHOST CLEANUP" process. Have a look at this: SQL Server Lock Timeout Exceeded Deleting Records in a Loop

Hope this helps. One weird solution that I found at the time was:
1. Insert the records to keep into another table with the same structure (a copy).
2. Truncate the table to be cleaned.
3. Insert the kept data from the copy back into the now-empty table.
4. Truncate the copy table to release the space.
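
Roughly, those steps look like this in T-SQL. This is only an illustration: MainTable_Keep is a staging-table name I'm assuming, the keep predicate is just the inverse of the purge filter from the question, TRUNCATE will fail if any foreign keys reference MainTable, and identity columns would need SET IDENTITY_INSERT handling on the reload.

  -- 1) Copy the rows to keep into a staging table with the same structure.
  --    @MaxId is the purge boundary variable from the question's procedure.
  INSERT INTO MainTable_Keep
  SELECT *
  FROM MainTable h
  WHERE h.Id >= @MaxId
     OR EXISTS (SELECT 1 FROM OtherTable ot WHERE ot.Id = h.Id);

  -- 2) Empty the original table almost instantly (page deallocation, no per-row locks).
  TRUNCATE TABLE MainTable;

  -- 3) Reload the rows that were kept.
  INSERT INTO MainTable
  SELECT * FROM MainTable_Keep;

  -- 4) Empty the staging table to release the space.
  TRUNCATE TABLE MainTable_Keep;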

This trick was faster than the delete-based cleanup itself, because the removal happened in a split second thanks to TRUNCATE. Somehow the cost of the insertions was lower than the cost of the deletions.

Still, I would recommend avoiding this solution. You could also reduce the chunk size to somewhere between 100 and 500 rows. That increases the time the cleanup takes, but you are less likely to hit lock escalation.

AXMIM