
This is a follow-up to Is it possible to force row level locking in SQL Server?. Here is the use case:

I have an accounts table holding account numbers, balances, etc. This table is used by many applications. It is quite possible that while I am modifying one account, someone else is modifying another. So the expected behavior is that I lock my account (one ROW) and the other user locks his (another ROW).

But SQL Server 2008 R2 escalates this lock to the page/table level, and the second user gets a timeout exception. I have tried all the solutions mentioned in the referenced question, but nothing works.

How can I force SQL Server to take only a row-level lock, OR how can I modify this model so that it works with page/table locking?

EDIT: The update targets a single record via its primary key, and the key is indexed, so only ONE ROW is being updated/locked, and the process takes no more than a minute.

EDIT: Now it looks like something weird is happening. I am using an ORM library for the DAL which opens more than one connection, and I have raised the question with their support. But for testing purposes, I opened two sessions in a query tool and did the following:

Session # 1

```sql
BEGIN TRAN;
UPDATE myTable SET COL_1 = COL_1 WHERE COL_1 = 101;
```

Session # 2

```sql
SELECT COL_1 FROM myTable WHERE COL_1 = 101;
```

The query in Session # 2 times out! Queries for other values of COL_1 work fine. So it looks like a SELECT is blocked in one session if the same record is in edit mode in another session.

While Oracle does allow one session to select a row (with default settings/no keywords) while it is being modified by another session, SQL Server (with default settings/no keywords) does not, so it seems the problem is with the library.
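For what it's worth, SQL Server can be made to behave like Oracle here by enabling row versioning for the default READ COMMITTED level. A sketch (the database name is a placeholder, and the switch needs no other sessions in the database while it runs):

```sql
-- With READ_COMMITTED_SNAPSHOT on, plain SELECTs under READ COMMITTED
-- read the last committed version of a row instead of blocking on a
-- writer's exclusive lock.
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;

-- Session # 2's query now returns the committed value of the row even
-- while Session # 1 holds it in an open transaction:
SELECT COL_1 FROM myTable WHERE COL_1 = 101;
```

Writers still block writers under this setting; only readers stop waiting on writers.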

bjan
  • How many total records do you expect this table to have? How many users does your site have? – Nilish May 18 '12 at 08:12
  • Right now I am testing with two users and the number of records is 13,000. My test has failed even on a single update, but I am going to lock 7 records. – bjan May 18 '12 at 09:33
  • If you were really locking just a **single row** with your one update, then a second update also targeting just a single row must work just fine. There must be something else happening in your SQL Server, or you might have a misconfiguration of sorts... – marc_s May 18 '12 at 09:51
  • Re edit 2: you have an uncommitted transaction in the first session, of course the second session times out. – Arvo May 18 '12 at 11:43
  • @Arvo It is deliberate, to keep the record locked; the second session is just selecting the record, not modifying it, so it should be allowed – bjan May 18 '12 at 11:55
  • No, it should not. SQL Server doesn't know whether you will commit or roll back the transaction, and therefore it doesn't return any value for this record. Like I said somewhere, SQL locking (and transactions) is not meant for blocking users from entering data - it is meant for retaining database consistency. – Arvo May 18 '12 at 11:59
  • @Arvo Oracle does support such behavior, and I have just tested this using two sessions. Now it looks like SQL Server does not support this behavior, i.e. it blocks a SELECT on a record in one session if the record is being modified by another session – bjan May 18 '12 at 12:26

2 Answers


SQL Server always uses row-level locking by default... so what exactly is it that you need?

If you lock more than a certain number of rows (roughly 5,000), then SQL Server performs lock escalation (it locks the table instead of holding more than 5,000 individual row locks) to optimize performance and resource usage - but that's a good thing! :-)

There are ways to turn this off entirely - but those are NOT recommended, since you would be messing with a very fundamental mechanism inside SQL Server's storage engine.
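If you really must, the per-table switch looks like this (table name is illustrative; this only makes the engine hold thousands of individual row locks instead of escalating, it doesn't make locking cheaper):

```sql
-- Per-table setting, available since SQL Server 2008.
-- DISABLE turns off escalation for this table; TABLE (the default)
-- and AUTO are the other options.
ALTER TABLE dbo.myTable SET (LOCK_ESCALATION = DISABLE);

-- Instance-wide alternative (affects every table - generally discouraged):
-- DBCC TRACEON (1211, -1);   -- disables lock escalation entirely
```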

See:

marc_s
  • I am doing a dummy update to lock records, and this dummy update uses the primary key fields. When I try to update another record in another session, it keeps waiting... is that row-level locking?! – bjan May 18 '12 at 07:35
  • **If you lock more than a certain amount of rows (roughly 5000), then SQL Server will do lock escalation (lock the table instead of more than 5000 rows individually)** So once the job is done, will the lock be released automatically, or do we have to do it manually? – Nilish May 18 '12 at 08:17
  • When a transaction completes (or is rolled back), SQL Server releases the locks involved – marc_s May 18 '12 at 08:18
  • In case I am doing an import from Excel to SQL Server and there are more than 50,000 records in Excel, is there any chance for another user to insert a single record into the same table? – Nilish May 18 '12 at 08:36
  • @Nilish: you need to "batch" your import into chunks of less than 5,000 rows each - then you won't run into lock escalation issues – marc_s May 18 '12 at 09:32
  • The solutions mentioned in the referenced question for turning off lock escalation are not working – bjan May 18 '12 at 09:50
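The "batch the import" suggestion from the comments could look roughly like this - a sketch assuming a hypothetical staging table `dbo.ExcelStaging` that the Excel data was bulk-loaded into, and a target `dbo.Accounts` with matching columns:

```sql
-- Move rows in small transactions so no single transaction ever holds
-- enough row locks (~5,000) to trigger lock escalation.
DECLARE @BatchSize int = 1000,
        @Rows      int = 1;

WHILE @Rows > 0
BEGIN
    BEGIN TRAN;

    -- Take one batch out of staging and insert it into the target
    -- in the same statement.
    DELETE TOP (@BatchSize)
    FROM   dbo.ExcelStaging
    OUTPUT deleted.AccountNo, deleted.Balance
    INTO   dbo.Accounts (AccountNo, Balance);

    SET @Rows = @@ROWCOUNT;   -- 0 when staging is empty

    COMMIT;                   -- locks from this batch are released here
END
```

Between batches all locks are released, so other users can insert into the same table while the import runs.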

Imagine your system as a client-server application where client and server are connected by a very slow line (snail mail, for example) and users take a very long time to modify their records (a week, for example). Then think about when you need to lock rows/data, when you actually allow changing rows/data, and so on - clearly, holding SQL Server's internal locks for days no longer seems like a good idea.

If you never have a situation where two users need to change the same record, then you don't need locking while editing data at all. You need locking only for the very short moment when the changes are written to the database - in other words, while the user commits them. (This is the optimistic locking scenario.) Of course, if two users change the same data, the latest changes will overwrite the earlier ones.
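If silently losing the earlier change is not acceptable, the usual refinement is to detect the conflict at commit time. A sketch, assuming a `rowversion` column (here called `RowVer`, an illustrative name) was added to the accounts table:

```sql
-- The client reads the row and remembers its version stamp:
--   SELECT Balance, RowVer FROM dbo.Accounts WHERE AccountNo = @AccountNo;
-- It can then edit for as long as it likes, holding no locks at all.

-- At save time, the UPDATE is guarded by the remembered version:
UPDATE dbo.Accounts
SET    Balance = @NewBalance
WHERE  AccountNo = @AccountNo
  AND  RowVer    = @OldRowVer;   -- matches 0 rows if someone saved first

IF @@ROWCOUNT = 0
    RAISERROR ('Record was changed by another user - reload and retry.', 16, 1);
```

SQL Server bumps a `rowversion` column automatically on every update, so no trigger or manual bookkeeping is needed.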

If you absolutely require that two users never modify the same data (pessimistic locking), then probably the most general way is to use an application-defined locks table OR specific field(s) in the data table(s). When one user checks a record out (when starting to edit, or similar), you check whether that record is already in use (locked) and, if not, mark it as locked. Of course, you then need some mechanism to remove stale locks.
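One possible shape for such a locks table (all names here are illustrative, not from the question):

```sql
CREATE TABLE dbo.RecordLocks (
    TableName sysname   NOT NULL,
    RecordId  int       NOT NULL,
    LockedBy  sysname   NOT NULL,
    LockedAt  datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
    CONSTRAINT PK_RecordLocks PRIMARY KEY (TableName, RecordId)
);

-- "Check out" a record before editing: the primary key makes a second
-- check-out of the same record fail immediately instead of waiting.
INSERT INTO dbo.RecordLocks (TableName, RecordId, LockedBy)
VALUES ('Accounts', @AccountNo, SUSER_SNAME());

-- Release the lock on save or cancel:
DELETE FROM dbo.RecordLocks
WHERE  TableName = 'Accounts' AND RecordId = @AccountNo;

-- A cleanup job removes stale locks from sessions that never came back:
DELETE FROM dbo.RecordLocks
WHERE  LockedAt < DATEADD(MINUTE, -30, SYSUTCDATETIME());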

Or use SQL Server's built-in support for such cases - look at the sp_getapplock stored procedure in MSDN; that way you don't have to worry about records staying locked forever, etc.
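A sketch of sp_getapplock for the account scenario in the question (the resource string is any name your application chooses):

```sql
BEGIN TRAN;

DECLARE @rc int;
EXEC @rc = sp_getapplock
        @Resource    = 'Account:101',    -- app-defined lock name
        @LockMode    = 'Exclusive',
        @LockOwner   = 'Transaction',    -- auto-released on commit/rollback
        @LockTimeout = 0;                -- fail at once if already held

IF @rc < 0   -- negative return codes mean the lock was not granted
    RAISERROR ('Account 101 is being edited in another session.', 16, 1);
ELSE
    UPDATE dbo.Accounts SET Balance = Balance WHERE AccountNo = 101;

COMMIT;      -- application lock released here
```

Because the lock is owned by the transaction, a dropped connection rolls the transaction back and releases the lock automatically - no stale-lock cleanup needed.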

Arvo
  • Again, the complete table will be locked if you lock more than 5,000 rows individually – Nilish May 18 '12 at 08:33
  • I suggested NOT using SQL locking for this scenario, but creating custom logic instead. We have done that for our financial app - users often start editing documents and go to lunch before hitting Ctrl+S :) – Arvo May 18 '12 at 09:51
  • I am locking just one row, and it would be for less than a minute, while the other user locks another row. Whether I take one minute or ten, the other user should not be waiting, as he is locking a different record – bjan May 18 '12 at 09:54
  • Why are you locking rows at all? Usually locks are needed only while writing changes to the database, which takes milliseconds (unless some slow trigger code is executed). If you want to block users from concurrently changing records, SQL Server's internal locking is not meant for that. – Arvo May 18 '12 at 10:37
  • The software-defined approach to row locking is a good idea (i.e. just have your own lock field and associated logic) as long as every application that uses the database respects it, which isn't the case in the original question. – Alan B Dec 14 '16 at 17:04