
Table schemas (SQL Server 2012)

Create Table InterestBuffer
(
    AccountNo CHAR(17) PRIMARY KEY,
    CalculatedInterest MONEY,
    ProvisionedInterest MONEY,
    AccomodatedInterest MONEY
)

Create Table #tempInterestCalc
(
    AccountNo CHAR(17) PRIMARY KEY,
    CalculatedInterest MONEY
)

I am doing an upsert: update the rows that already exist and insert the others.

UPDATE A
SET A.CalculatedInterest = A.CalculatedInterest + B.CalculatedInterest
FROM InterestBuffer A
INNER JOIN #tempInterestCalc B ON A.AccountNo = B.AccountNo

INSERT INTO InterestBuffer
SELECT A.AccountNo, A.CalculatedInterest, 0, 0
FROM #tempInterestCalc A
LEFT JOIN InterestBuffer B ON A.AccountNo = B.AccountNo
WHERE B.AccountNo IS NULL

Everything works fine in isolation. The problem occurs during concurrent executions. I populate #tempInterestCalc by joining various other tables, including a LEFT JOIN with the InterestBuffer table, and each concurrent execution inserts a different set of rows into #tempInterestCalc.

My problem is that one execution sometimes stays blocked by another until I commit them serially.

My question: since each execution supplies a different set of rows, its row locks should not affect the other concurrent operations, so why do they block each other? Any suggestion will be appreciated.

UPDATE 1: I have used SP_LOCK on the InterestBuffer table. It shows IndId = 1, Type = KEY, Mode = X, Status = GRANT.

I think the update and insert block the other transactions in order to prevent phantom reads.
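For richer detail than SP_LOCK (which is deprecated), the lock list and the blocking session can be read from the DMVs. A minimal sketch, to be run while the blocking is occurring:

-- Current locks in this database, joined to the waiting-tasks DMV
-- so that blocked requests show who is blocking them.
SELECT tl.request_session_id,
       tl.resource_type,        -- KEY, PAGE, OBJECT, ...
       tl.request_mode,         -- S, U, X, IX, ...
       tl.request_status,       -- GRANT or WAIT
       wt.blocking_session_id
FROM sys.dm_tran_locks AS tl
LEFT JOIN sys.dm_os_waiting_tasks AS wt
    ON tl.lock_owner_address = wt.resource_address
WHERE tl.resource_database_id = DB_ID()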

UPDATE 2: Sorry! Previously I said the update was fine, but I have now realized that the first transaction's writes are blocking the second transaction's writes. In the first transaction I run the update and the insert. In the second transaction, after inserting data into the #tempInterestCalc table, I stop before the writes as shown below, and it works fine.

--INSERT DATA INTO #tempInterestCalc 

SELECT * FROM #tempInterestCalc 
RETURN

--UPDATE InterestBuffer

--INSERT InterestBuffer

UPDATE 3: I think my problem is reading from InterestBuffer while updating and inserting into InterestBuffer.

UPDATE 4: My answer below sometimes works if I rebuild the BranchCode index on the InterestBuffer table. Is there any reason a batch insert/update would cause problems with the index?

UPDATE 5: I have read that if most of the rows on a page need to be locked for a batch update, SQL Server may lock the whole page. Is there any way to see which rows are contained in which page, or which pages are locked and released during execution?
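One way to inspect this, as a sketch: %%physloc%% and sys.fn_PhysLocFormatter are undocumented but widely used for exactly this kind of page-level inspection, and sys.dm_tran_locks shows the page locks currently held or waited on.

-- Map each row of InterestBuffer to its physical file:page:slot.
SELECT AccountNo,
       sys.fn_PhysLocFormatter(%%physloc%%) AS FilePageSlot
FROM InterestBuffer

-- List PAGE locks currently held or requested in this database.
SELECT resource_description,    -- formatted as file:page
       request_mode,
       request_status,
       request_session_id
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID()
  AND resource_type = 'PAGE'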

UPDATE 6: I am providing my scenario.

CREATE TABLE [dbo].[Account](
    [AccountNo] [char](17) NOT NULL,
    [BranchCode] [char](4) NOT NULL,
 CONSTRAINT [PK_Account] PRIMARY KEY CLUSTERED 
(
    [AccountNo] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

CREATE TABLE [dbo].[InterestBuffer](
    [AccountNo] [char](17) NOT NULL,
    [BranchCode] [char](4) NOT NULL,
    [CalculatedInterest] [money] NOT NULL,
 CONSTRAINT [PK_Buffer] PRIMARY KEY CLUSTERED 
(
    [AccountNo] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

Query for Branch 0001:

BEGIN TRAN

Declare @BranchCode AS Char(4) = '0001'
Declare @CalculatedInterestNew MONEY = 10

CREATE TABLE #tempInterestCalc
(
    AccountNo Char(17),
    BranchCode Char(4),
    CalculatedInterestNew MONEY,
    CalculatedInterestOld MONEY
)

INSERT INTO #tempInterestCalc
SELECT A.AccountNo, A.BranchCode, ISNULL(B.CalculatedInterest, 0), B.CalculatedInterest
FROM Account A 
LEFT JOIN InterestBuffer B ON A.AccountNo = B.AccountNo AND A.BranchCode = B.BranchCode
WHERE A.BranchCode = @BranchCode

UPDATE A
SET A.CalculatedInterest = B.CalculatedInterestNew + @CalculatedInterestNew
FROM InterestBuffer A
INNER JOIN #tempInterestCalc B ON A.AccountNo = B.AccountNo AND A.BranchCode = B.BranchCode
WHERE A.BranchCode = @BranchCode

INSERT INTO InterestBuffer
SELECT A.AccountNo, A.BranchCode, A.CalculatedInterestNew + @CalculatedInterestNew
FROM #tempInterestCalc A
WHERE A.CalculatedInterestOld IS NULL

DROP TABLE #tempInterestCalc
--ROLLBACK
--COMMIT TRAN

For branches 0002 and 0003, just change the @BranchCode variable value to '0002' or '0003' and run them simultaneously. (Screenshots: Branch One, Branch Two, Branch Three.)

Esty
  • try to use with no lock.... – koushik veldanda Sep 21 '15 at 10:30
  • Could u explain more.. How I am gonna use NO LOCK with write? – Esty Sep 21 '15 at 10:37
  • select * from table with (nolock) – koushik veldanda Sep 21 '15 at 10:40
  • Yeah I used that after inserting in temp table. But it just show 311 rows where I tried to insert 318 rows and still executing query. – Esty Sep 21 '15 at 10:45
  • you could also do that in a transaction. – ProblemSolver Sep 21 '15 at 11:18
  • I actually run the query in a window starting with BEGIN TRAN and then run another with BEGIN TRAN and different set of data. Until I COMMIT TRAN/ROLLBACK the First; Second remain awaiting. – Esty Sep 21 '15 at 11:29
  • 2
    `NOLOCK` is not a magic wand. You should understand what it does before you use it. Particularly since it appears you are dealing with money. Possibly rewriting your insert query to use `NOT EXISTS` may help but you'd have to check the query plan – Nick.Mc Sep 21 '15 at 12:19
  • I have used WHERE A.AccountNo NOT IN (SELECT AccountNo FROM InterestBuffer ) instead of left join. But each execution I worked with different rows so why they block each other??? – Esty Sep 22 '15 at 03:22
  • Is the insert statement using an index on the left join that has rowlocks off? See this: http://stackoverflow.com/questions/8387024/sql-server-insert-one-row-locks-whole-table – archangel76 Sep 22 '15 at 04:44
  • Nope. AccountNo is the only index and Pk in InterestBuffer table which has Row Locks Allowed Yes. – Esty Sep 22 '15 at 04:50
  • Not sure if it solves your problem, but you could use `merge` (https://msdn.microsoft.com/en-us/library/bb510625.aspx) instead of 2 separate queries to update and insert in one go. – Me.Name Sep 22 '15 at 06:44
  • You may experiment with rowlock hint, but this may nor help either. Then you can try avoid transactions - usually transactions magnify deadlocking problems; of course you have to be more carefully handle errors in that case. – Arvo Sep 22 '15 at 06:45
  • One idea would be to try and make your query execute faster. Have you got all the relevant columns indexed, have you tried a different type of join, like NOT EXISTS rather than LEFT JOIN – jazza1000 Sep 22 '15 at 07:17
  • @Me.Name I have used merge and same result.. – Esty Sep 22 '15 at 08:41
  • @Arvo actually it will happen within transaction. My procedure will be run concurrently for different branches. – Esty Sep 22 '15 at 08:42
  • I think this problem is related to mine but I didn't understand it all. I assume first transaction is locking pages which are required for second transaction. [link](http://dba.stackexchange.com/questions/75848/sql-server-select-while-insert-on-another-transaction-gives-unexpected-results) – Esty Sep 22 '15 at 09:15
  • I'm sorry, but what exactly is your problem? Deadlock occurs? Upserting takes too much time? Solutions to these are very different. – Arvo Sep 22 '15 at 10:27
  • Please take a look at UPDATE 2 @Arvo. I am inserting branch wise different account sets in #tempInterestCalc table. I first run a transaction for branch 1 using Begin Tran only, then branch 2, then branch 5. Until I commit/rollback Branch1 transaction; branch 5 become locked. But brnach 2 is executing perfectly with branch 1. UPSERT of BRANCH 1 is blocking BRANCH 5 but not BRANCH 2. My problem is why it is blocking though I use different data for each branch?? – Esty Sep 23 '15 at 03:15
  • This is actually not a problem - no data will be lost. Two possible causes: locks on index pages; row lock escalation to page locks. You may use rowlock hint, but IMO this doesn't completely avoid index page locks. – Arvo Sep 23 '15 at 06:22
  • just my 2 cents....QUEUE the request, and execute them 1 by 1. – ken lacoste Sep 30 '15 at 10:59
  • Executing 1 by 1 is doing fine. But I need to run them concurrently means without any dependency. – Esty Sep 30 '15 at 11:06
  • Please show the queryplans – Steve Ford Oct 01 '15 at 14:38
  • Also what happens when you add OPTION (RECOMPILE) as the query optimiser knows very little about your temp table. i.e. UPDATE ... WHERE ... OPTION (RECOMPILE). Try this on both update statements – Steve Ford Oct 01 '15 at 14:40
  • @Steve Ford, I have used OPTION (RECOMPILE) but it didn't work. – Esty Oct 04 '15 at 03:19

3 Answers


i) If dirty reads are not a problem for you, you can use NOLOCK, or you can SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED at the top of your proc; the two are equivalent. Just weigh the "dirty read problem" before using NOLOCK.

ii) You have not explained your problem very well. What is the use of #tempInterestCalc and #temp?

iii) Where does #tempInterestCalc get populated from?

iv) During the insert, no data from the left-joined table B is actually used, so you can remove the LEFT JOIN and use NOT EXISTS instead (see the sketch below). But this depends on the points above being clarified.

v) You are getting records from InterestBuffer into a temp table, then updating them back, then inserting into the same table again. This is not clear.
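For point iv, a minimal sketch of the NOT EXISTS rewrite of the question's original insert (the explicit column list is assumed from the question's schema):

INSERT INTO InterestBuffer (AccountNo, CalculatedInterest, ProvisionedInterest, AccomodatedInterest)
SELECT A.AccountNo, A.CalculatedInterest, 0, 0
FROM #tempInterestCalc AS A
WHERE NOT EXISTS (SELECT 1
                  FROM InterestBuffer AS B
                  WHERE B.AccountNo = A.AccountNo)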

KumarHarsh
  • 2. #tempInterestCalc is a temp table where I prepare data which is required the upsert operation in InterestBuffer table. Where #temp is just a replica of InterestBuffer table which holds the data before update operation. See my answer below why I used #temp but that is a failure. 3. #tempInterestCalc is populated from other various tables including a Left Join with InterestBuffer table. 4. U r correct no need to join #tempB – Esty Oct 01 '15 at 08:04
  • Using #temp is just a try which didn't work. My requirement is to Update existing rows in InterestBuffer Table from #tempInterestBuffer and insert those doesn't exists. First I filtered Branch wise data in #tempInterestBuffer table which I inner joined with InterestBuffer table during update and insert. My question is then why transaction in a branch is blocking other branch as i filtered them in #tempInterestBuffer. – Esty Oct 01 '15 at 08:27
  • You should definitely try to add index to #tempInterestCalc. I really feel like it should solve your problem. EDITED: https://msdn.microsoft.com/en-us/library/ms174979.aspx try ALLOW_PAGE_LOCKS = OFF – GSazheniuk Oct 01 '15 at 20:26
  • In #tempInterestCalc I made AccountNo PK so by default it is clustered index and I add BranchCode as non clustered index. But no luck. – Esty Oct 04 '15 at 03:14
  • I have read that if maximum rows of a page needs to be locked for batch update then sql server locked that page. Is there any way to see which row is containing by which page or which page is going to lock and release during execution?? – Esty Oct 04 '15 at 03:24

You could have a potential deadlock problem because you are doing another read against the InterestBuffer table after a write. A transaction could deadlock if another has blocked part of the InterestBuffer table for the update and your transaction is trying to read from it again for the select needed to do the insert.

You said you are already left joining with InterestBuffer while calculating your #tempInterestCalc table... why not use it to cache some of the data needed from InterestBuffer so you don't have to read from it again?

Change your temp table to:

Create Table #tempInterestCalc
(
    AccountNo CHAR(17) PRIMARY KEY,
    CalculatedInterestNew MONEY,
    CalculatedInterestOld MONEY
)

You might possibly want to set repeatable read isolation level before beginning your transaction with:

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

It's more restrictive locking, but will prevent other transactions from trying to process the same records at the same time, which you probably need because you are combining the old and new values. Consider this scenario:

  • Transaction 1 reads data and wants to add 0.03 to existing CalculatedInterest of 5.0.
  • Transaction 2 reads data and wants to add 0.02 to the 5.0.
  • Transaction 1 updates CalculatedInterest to 5.03.
  • Transaction 2's update overwrites the values from transaction one to 5.03 (instead of adding to it and coming up with 5.05).

Maybe you don't need this if you're sure that transactions will never touch the same records, but if they might, read committed won't let transaction 2 read the values until transaction 1 is finished with them.
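To make the scenario concrete, here is a minimal sketch of that lost update as two sessions under the default READ COMMITTED level (the account number and amounts are made up):

-- Run this in two SSMS windows; both sessions read 5.00 before either writes.
BEGIN TRAN

DECLARE @current MONEY
SELECT @current = CalculatedInterest
FROM InterestBuffer
WHERE AccountNo = 'ACC-1'           -- shared lock is released immediately

-- Session 1 adds 0.03; session 2 adds 0.02.
UPDATE InterestBuffer
SET CalculatedInterest = @current + 0.03
WHERE AccountNo = 'ACC-1'

COMMIT                              -- last writer wins: 5.03 or 5.02, never 5.05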

Then separate your transaction into a distinct read phase followed by a write phase:

--insert data into #tempInterestCalc and include the previous interest value
insert into #tempInterestCalc
select AccountNo, 
    Query.CalculatedInterest CalculatedInterestNew, 
    InterestBuffer.CalculatedInterest CalculatedInterestOLD
from 
    (
    ...
    ) Query
left join InterestBuffer
on Query.AccountNo = InterestBuffer.AccountNo

UPDATE A
SET A.CalculatedInterest = B.CalculatedInterestNew + B.CalculatedInterestOld
FROM InterestBuffer A
INNER JOIN #tempInterestCalc B ON A.AccountNo = B.AccountNo

INSERT INTO InterestBuffer
SELECT A.AccountNo, A.CalculatedInterestNew, 0, 0
FROM #tempInterestCalc A
--no join here needed now to read from InterestBuffer
WHERE CalculatedInterestOld is null

This shouldn't deadlock... but you could see "unnecessary" blocking due to Lock Escalation, particularly if you are updating a large number of rows. Once a statement holds more than roughly 5,000 locks on a table, SQL Server escalates them to a table lock. No other transactions will then be able to continue until the transaction completes. This isn't necessarily a bad thing... you just want to make sure that your transactions are as short as possible so as to not block other transactions for too long. If lock escalation is causing you problems, there are some things you can do to mitigate it (see the sketch after this list), such as:

  • Breaking your transaction up to do smaller chunks of work so as to create fewer locks.
  • Ensuring you have an efficient query plan.
  • Making judicious use of lock hints.
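If escalation itself turns out to be the culprit, it can also be switched off per table. A sketch (valid syntax from SQL Server 2008 onward; it trades escalation for the memory cost of holding many individual locks):

-- Keep row/page locks instead of escalating to a table lock.
ALTER TABLE dbo.InterestBuffer SET (LOCK_ESCALATION = DISABLE)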

Check your query plan and see if there are any table scans of InterestBuffer in any statements... particularly in your initial population of #tempInterestCalc, since you didn't show how you are building that.

If you will absolutely never be updating accounts in one branch at the same time, then you might consider keeping your primary key the same but changing your clustered index to Branch, Account number (order is significant). This will keep all your records of the same branch physically next to each other and will reduce the chance that your plan will do a table scan or lock pages that other transactions might need. You then can also use the PAGLOCK hints, which will encourage SQL Server to lock by page instead of row and prevent reaching the threshold to trigger lock escalation. To do this, modifying your code from UPDATE 6 in your question would look something like this:

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
BEGIN TRAN

Declare @BranchCode AS Char(4) = '0001'
Declare @CalculatedInterestNew MONEY = 10

CREATE TABLE #tempInterestCalc
(
    AccountNo Char(17),
    BranchCode Char(4),
    CalculatedInterestNew MONEY,
    CalculatedInterestOld MONEY
)

INSERT INTO #tempInterestCalc
SELECT A.AccountNo, A.BranchCode, ISNULL(B.CalculatedInterest, 0), B.CalculatedInterest
FROM Account A
LEFT JOIN InterestBuffer B
ON A.AccountNo = B.AccountNo AND A.BranchCode = B.BranchCode
WHERE A.BranchCode = @BranchCode

UPDATE A WITH (PAGLOCK)
SET A.CalculatedInterest = B.CalculatedInterestNew + @CalculatedInterestNew
FROM InterestBuffer A
INNER JOIN #tempInterestCalc B ON A.AccountNo = B.AccountNo AND A.BranchCode = B.BranchCode
WHERE A.BranchCode = @BranchCode

INSERT INTO InterestBuffer WITH (PAGLOCK)
SELECT A.AccountNo, A.BranchCode, A.CalculatedInterestNew + @CalculatedInterestNew
FROM #tempInterestCalc A
WHERE A.CalculatedInterestOld IS NULL

DROP TABLE #tempInterestCalc
--ROLLBACK
--COMMIT TRAN

Because the records are physically sorted together this should only lock a few pages... even when updating thousands of records. You could then run a transaction for branch 0003 at the same time as 0001 without any blocking issues. However you will probably have a blocking problem if you try to do an adjacent branch such as 0002 at the same time. This is because some records from branch 0001 and 0002 will probably share the same page.
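For reference, the clustered index change suggested above might look like the following sketch (the constraint name PK_Buffer is taken from UPDATE 6; the new index name is made up):

-- Drop the clustered PK, recluster on (BranchCode, AccountNo),
-- then re-add the PK as a nonclustered constraint.
ALTER TABLE dbo.InterestBuffer DROP CONSTRAINT PK_Buffer

CREATE CLUSTERED INDEX IX_InterestBuffer_Branch_Account
    ON dbo.InterestBuffer (BranchCode, AccountNo)

ALTER TABLE dbo.InterestBuffer
    ADD CONSTRAINT PK_Buffer PRIMARY KEY NONCLUSTERED (AccountNo)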

If you really need to separate your branches you could look into using a Partitioned Table or Index. I don't know much about them, but it sounds like it might be useful for what you are trying to do, though it probably comes with its own set of complications.

Brian Pressler
  • Thnx. I am looking into it. – Esty Oct 07 '15 at 03:55
  • Up vote for the idea to read necessary data first then write. But SET TRANSACTION ISOLATION LEVEL REPEATABLE READ is making deadlock. See my Update 6 is question with schema and data. – Esty Oct 07 '15 at 07:24
  • Deadlock (neither transaction can commit because it's blocked by the other) or just blocking until the other finishes? – Brian Pressler Oct 07 '15 at 13:08
  • just blocking until other finishes... Is there any problem to use TRANSACTION ISOLATION LEVEL READ COMMITTED??? – Esty Oct 07 '15 at 13:20
  • I added to my answer why i recommended read committed and also another idea to try at the end. – Brian Pressler Oct 07 '15 at 13:35
  • transaction 2 will never try to update the accounts updating in transaction 1. For each transaction I will use different dataset. So I think I may use READ COMMITTED. – Esty Oct 07 '15 at 15:13
  • Your idea to remove left join with same table during INSERT was great. I will modify the index and let u know tomorrow. Its already late here. Would u please check my Update 6. I have provided a set of data and table schema. – Esty Oct 07 '15 at 15:20
  • I have changed my Index but it transaction still blocks. I have added execution plans. – Esty Oct 08 '15 at 05:18
  • How are you determining that it's still blocking? Also… have you tried running two different branches sorted further apart concurrently? – Brian Pressler Oct 08 '15 at 05:53
  • Let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/91697/discussion-between-tanjim-rahman-and-brian-pressler). – Esty Oct 08 '15 at 06:02
  • I think you are experiencing Lock Escalation because your transactions are updating a lot of rows and triggering a table lock. I added some information to my answer about Lock Escalation. – Brian Pressler Oct 09 '15 at 01:19
  • I have partitioned my table already in different file groups by branch. However is there any way to see that which of the records contained by which page and which page is currently locked??? – Esty Oct 09 '15 at 03:57

I have just found a solution. As I am executing the query concurrently by branch, I made a slight modification to my tables, as follows:

Create Table InterestBuffer
(
    AccountNo CHAR(17) PRIMARY KEY,
    BranchCode CHAR(4),
    CalculatedInterest MONEY,
    ProvisionedInterest MONEY,
    AccomodatedInterest MONEY
)

Create Table #tempInterestCalc
(
    AccountNo CHAR(17) PRIMARY KEY,
    BranchCode CHAR(4),
    CalculatedInterest MONEY
)

Now I am inserting data into #tempInterestCalc filtered by branch.

--INSERT DATA INTO #tempInterestCalc 

SELECT * 
into #temp
FROM InterestBuffer A WITH (NOLOCK)
Where A.BranchCode = MY_BRANCH

UPDATE A
SET A.CalculatedInterest = C.CalculatedInterest + B.CalculatedInterest
FROM InterestBuffer A
INNER JOIN #tempInterestCalc B ON A.AccountNo = B.AccountNo
INNER JOIN #temp C ON A.AccountNo = C.AccountNo AND A.BranchCode = C.BranchCode

INSERT INTO InterestBuffer
SELECT A.AccountNo, A.BranchCode, A.CalculatedInterest, 0, 0
FROM #tempInterestCalc A
LEFT JOIN #temp B ON A.AccountNo = B.AccountNo AND A.BranchCode = B.BranchCode
WHERE B.AccountNo IS NULL

My problem was that during the update/insert I was trying to read from the same table, and that read was blocked by another transaction's writes.

Using NOLOCK is safe here because a single branch's data can't be modified by another transaction, only by its own transaction (so there is no chance of a dirty read).

I am still searching for better approaches that avoid NOLOCK.
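One candidate, suggested in the comments below, is to take update locks during the initial read instead of dirty-reading. A sketch (only the first read changes; the UPDATE and INSERT stay exactly as above, and MY_BRANCH remains the placeholder from this answer):

-- UPDLOCK makes a concurrent transaction touching the same rows wait
-- here, instead of both reading and later deadlocking on the write.
SELECT *
INTO #temp
FROM InterestBuffer A WITH (UPDLOCK, ROWLOCK)
WHERE A.BranchCode = MY_BRANCH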

Esty
  • 2
    If you perform select before inserting into same table, then use `with(updlock, rowlock)` hints for select and `with(rowlock)` hint for insert. Without `updlock` hint you may run into deadlocks easily. Other problems - like waiting for concurrent transaction to finish - are usually no-problems, because you for sure have your transactions designed very short and not time-consuming :) And, considering you are freely using `nolock` hints, maybe you don't need (long) transaction at all? – Arvo Oct 01 '15 at 07:40
  • @Arvo would u please provide me a link to drill down ur statement about using UPDALOCK and rowlock with SELECT for avoiding deadlocks. I just need to learn the reason. – Esty Oct 06 '15 at 06:59
  • 1
    I have no link. But imagine scenario, when you first read, then update same row in transaction - if two of these transactions are running in parallel, then both can place read lock and will deadlock when updating. Using (updlock) forces placing write lock and second transaction will just wait before first one finishes. – Arvo Oct 06 '15 at 07:57
  • But if each transactions are working with different rows then why it would create deadlock. Means If I filtered data by Where clause with BranchCode (non clustered index) during update. – Esty Oct 06 '15 at 08:39
  • 1
    If you're not using `rowlock` hint, then SQL escalates row locks easily to page locks - and in this case these rows can reside on same page. What I don't know, does `rowlock` hint affect index page locking or not - I have seen key (index) locks at page level even while using rowlock hints in all queries. Well, this `rowlock` is only a hint - SQL server may escalate locks anyway. For handling deadlocks you have to use some try-catch logic anyway - I've learnt that in hard way :) – Arvo Oct 06 '15 at 09:59
  • Yeah I have also learned the same from last several days of study. We may use rowlock hint but in case of batch operation there is no guarantee that SQL server will honor that. It may lock the page or table if necessary. However I will share my real scenario with data soon. Please take a look to that. – Esty Oct 06 '15 at 10:53