I have an MS SQL Server database containing several tables. Each table has an 'SN' column. Several clients work with this database, but each client operates only on records with its own SN value.
One type of client app is legacy software written in Python using pyodbc. As I understand it, that software can create long transactions: it can execute UPDATE or INSERT statements and commit them only several hours later. I know it is wrong to do that, but modifying that software is not desirable.
The transaction isolation level is READ COMMITTED, with READ_COMMITTED_SNAPSHOT set to ON.
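For reference, RCSI was enabled with roughly the following ('MyDb' is a placeholder for the actual database name):

```sql
-- Enable read-committed snapshot isolation (RCSI) for the database.
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;

-- Verify the setting:
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'MyDb';
```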
Everything works OK except in the following scenario:
- First, the legacy client starts a transaction which UPDATEs records with its SN value, say SN = 1
- Then another client app tries to UPDATE records with its SN value, say SN = 2
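The scenario above can be reproduced in two separate sessions with something like this (the table and column names other than SN are assumptions, not the real schema):

```sql
-- Session 1 (simulating the legacy pyodbc client, SN = 1):
BEGIN TRANSACTION;
UPDATE dbo.MyTable SET SomeCol = 'x' WHERE SN = 1;
-- ...transaction is deliberately left open, as the legacy app does...

-- Session 2 (second client, SN = 2) — this statement blocks
-- with an LCK_M_U wait even though it touches different rows:
UPDATE dbo.MyTable SET SomeCol = 'y' WHERE SN = 2;
```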
This situation results in an LCK_M_U wait for the second client; the lock resource looks like: ridlock, field = 1, pageid = 311, dbid = 5, id = lock2776cf380, mode = X, associatedObjectId = ...many numbers here...
Although the two clients use different SN values, one blocks the other. I suspect this happens because SQL Server locks not the particular rows but a whole page (a collection of rows, I guess).
According to the SO question "Is it possible to force row level locking in SQL Server?", row locks are not guaranteed because of lock escalation.
I considered setting the transaction isolation level to READ UNCOMMITTED, but I'm not sure it is the right approach.
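What I considered would look like this on the second client's connection (again, table and column names are placeholders):

```sql
-- Considered workaround — probably not correct:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
-- Note: as far as I can tell, SQL Server still takes
-- update/exclusive locks on the target table of an UPDATE
-- regardless of this setting, so it may not help here.
UPDATE dbo.MyTable SET SomeCol = 'y' WHERE SN = 2;
```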
Is there any way to properly configure MS SQL Server to avoid this blocking and allow 'simultaneous' updates of different rows?