Posting the question and one answer here. Maybe somebody has a better answer...
It's possible to write code that triggers a deadlock even for a single user if the developer accidentally opens a second connection to the database instead of reusing an existing one (which might already have an open transaction). Certain O/RM and LINQ frameworks make this mistake easy.
Here is the scenario in pure SQL terms:
--Setup:
CREATE TABLE Vendors(
    VendorID int NOT NULL,
    VendorName nvarchar(100) NOT NULL,
    ExtraColumn int NOT NULL,
    CONSTRAINT PK_Vendors PRIMARY KEY CLUSTERED(VendorID))
GO
INSERT INTO Vendors(VendorID,VendorName,ExtraColumn) VALUES (1, 'Microsoft', 12345)
INSERT INTO Vendors(VendorID,VendorName,ExtraColumn) VALUES (2, 'Oracle', 12345)
--Connection 1:
BEGIN TRANSACTION
UPDATE Vendors SET ExtraColumn = 222 WHERE VendorID = 2
--Connection 2:
BEGIN TRANSACTION
SELECT VendorName FROM Vendors WHERE VendorID = 1
This code (and the C#/Java/ORM/LINQ code that generated it) might run happily in development and testing while the data is small, but suddenly deadlock in production once the data/memory profile changes and SQL Server escalates the row locks to a table lock.
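If connection 1's update were holding a table-level lock, connection 2's SELECT would block even though it reads a different row. As a rough way to see that blocking by hand (this uses an explicit table hint, so it is only an approximation, not real escalation):

--Connection 1, with an explicit table lock standing in for an escalated lock:
BEGIN TRANSACTION
UPDATE Vendors WITH (TABLOCKX) SET ExtraColumn = 222 WHERE VendorID = 2

--Connection 2 now blocks, even though it touches a different row:
BEGIN TRANSACTION
SELECT VendorName FROM Vendors WHERE VendorID = 1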
So how can I force lock escalation up to table level in my test environment so I can flush out any bugs like this (where the code opens up the 2nd connection)?
Lock escalation is completely controlled by the SQL Server database engine, and it's not possible to predict when it will escalate from row locks to a table lock. I won't rehash all the details, which you can find here, but one point in particular is worth noting: "Lock escalation is triggered when... The number of locks in an instance of the Database Engine exceeds memory or configuration thresholds." This means it could be completely unrelated to my carefully crafted code and perfectly selected indexes.
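That said, you can at least confirm what granularity the engine actually chose in a given test run. A minimal diagnostic sketch: while both transactions are still open, query sys.dm_tran_locks from a third connection and look at the resource_type column (KEY for row locks, PAGE for page locks, OBJECT for a table lock).

--Run from a third connection while the transactions are open:
SELECT request_session_id,
       resource_type,   -- KEY, PAGE, OBJECT, ...
       request_mode,    -- S, X, IX, ...
       request_status   -- GRANT or WAIT
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID()
ORDER BY request_session_id, resource_type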