19

Let's say I have a simple stored procedure that looks like this (note: this is just an example, not a practical procedure):

CREATE PROCEDURE incrementCounter AS

DECLARE @current int
SET @current = (select CounterColumn from MyTable) + 1

UPDATE
    MyTable
SET
    CounterColumn = @current
GO

We're assuming I have a table called 'MyTable' that contains one row, with the 'CounterColumn' containing our current count.

Can this stored procedure be executed multiple times, at the same time?

i.e. is this possible:

I call 'incrementCounter' twice. Call A gets to the point where it sets the 'current' variable (let's say it is 5). Call B gets to the point where it sets the 'current' variable (which would also be 5). Call A finishes executing, then Call B finishes. In the end, the table should contain the value 6, but instead contains 5 due to the overlap of execution.

Muhammad Hewedy
John
  • This phenomenon is called "lost update". It is "isolation" not atomicity that you need to look at. Under default read committed isolation level it **is** possible contrary to the implication in the accepted answer. – Martin Smith Feb 12 '12 at 14:34
  • A practical answer to this in SQL Server 2005 is: Use a table with an `IDENTITY` column, Insert into the table then read back the new value with `SCOPE_IDENTITY`. Then you never have a collision. – Nick.Mc Jan 30 '17 at 10:50
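For reference, here is a minimal sketch of the `IDENTITY`/`SCOPE_IDENTITY` approach suggested in the comment above (the table and procedure names are made up purely for illustration):

CREATE TABLE CounterIds (Id int IDENTITY(1,1) PRIMARY KEY)
GO

CREATE PROCEDURE incrementCounterViaIdentity AS
    -- Each INSERT is handed a unique IDENTITY value, so concurrent callers never collide
    INSERT INTO CounterIds DEFAULT VALUES
    -- SCOPE_IDENTITY() returns the value generated in this scope, not one from another session
    SELECT SCOPE_IDENTITY() AS NewCounterValue
GO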

5 Answers

14

This is for SQL Server.

Each statement is atomic, but if you want the stored procedure to be atomic (or any sequence of statements in general), you need to explicitly surround the statements with

BEGIN TRANSACTION
Statement ...
Statement ...
COMMIT TRANSACTION

(It's common to use BEGIN TRAN and COMMIT TRAN for short.)

Of course there are lots of ways to get into lock trouble depending on what else is going on at the same time, so you may need a strategy for dealing with failed transactions. (A complete discussion of all the circumstances that might result in locks, no matter how you contrive this particular SP, is beyond the scope of the question.) But failed calls will still be resubmittable because of the atomicity. In my experience you'll probably be fine, although that depends on your transaction volumes and the other activity on the database. Excuse me for stating the obvious.

Contrary to a popular misconception, this will work in your case with default transaction level settings.
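Applied to the procedure in the question, a minimal sketch might look like this (the `UPDLOCK` hint is my addition, not part of this answer; as the comments below point out, without such a hint the default isolation level still lets two callers read the same value):

CREATE PROCEDURE incrementCounter AS
BEGIN TRANSACTION

DECLARE @current int
-- UPDLOCK keeps an update lock on the row until COMMIT, so a concurrent caller waits here
SET @current = (SELECT CounterColumn FROM MyTable WITH (UPDLOCK)) + 1

UPDATE
    MyTable
SET
    CounterColumn = @current

COMMIT TRANSACTION
GO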

dkretz
  • Atomicity is not a cure for concurrency problems and race conditions. Modifying the OP's example stored procedure as you have suggested will NOT tolerate concurrency because locks are acquired and escalated over time. If two clients, A & B, run this procedure at the same time, you could get this pattern: A gets read lock and reads table, then releases lock. B gets read lock and reads **same value from table**, then releases lock. A gets update lock and updates, then B does same. BLAM. You're dead. You have to acquire the correct lock earlier to properly block! – ErikE Feb 12 '12 at 00:36
  • As OP says, it's a theoretical question, not a practical procedure; so I answered on his own terms. There's more wrong with this pattern than you mention, but that wasn't the question. I didn't get into how to make this useful because we shouldn't go here in the first place. If you have a better answer, add it! – dkretz Feb 12 '12 at 05:34
  • He specifically said that he wants a cure. Your answer does not cure. Wrong answers deserve downvotes until they look wrong enough that others won't be deceived. Theoretical or not, it's incomplete. – ErikE Feb 12 '12 at 07:32
  • Even if the locks are not released they are then escalated so the pattern is the same. – ErikE Feb 12 '12 at 07:33
12

In addition to placing the code between a BEGIN TRANSACTION and END TRANSACTION, you'd need to ensure that your transaction isolation level is set correctly.

For example, the SERIALIZABLE isolation level will prevent lost updates when the code runs concurrently, but READ COMMITTED (SQL Server's default) will not.

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

As others have already mentioned, whilst ensuring consistency, this can cause blocking and deadlocks and so may not be the best solution in practice.
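A sketch of what that might look like for the procedure in the question (note, per the comments below, that under SERIALIZABLE two concurrent callers can deadlock rather than both succeeding, so one call may need to be retried):

CREATE PROCEDURE incrementCounter AS
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

BEGIN TRANSACTION

DECLARE @current int
-- the shared/range locks taken by this read are held until COMMIT under SERIALIZABLE
SET @current = (SELECT CounterColumn FROM MyTable) + 1

UPDATE
    MyTable
SET
    CounterColumn = @current

COMMIT TRANSACTION
GO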

Dave Cluderay
  • `SERIALIZABLE` and `REPEATABLE READ` will prevent this but your explanation as to why [is not correct](http://dba.stackexchange.com/a/12718/3690) – Martin Smith Feb 12 '12 at 14:39
  • @Dave, I did a little more research and found that your description is inaccurate. SERIALIZABLE isolation level will not prevent other clients from reading the same value. What it WILL do if two clients read in SERIALIZABLE then one tries to update is cause a deadlock and one client connection will be terminated with prejudice. – ErikE Feb 12 '12 at 23:02
  • It can also slow things down - a lot. – dkretz Feb 12 '12 at 23:53
1

I use this method

CREATE PROCEDURE incrementCounter
AS

DECLARE @current int

UPDATE MyTable
SET
  @current = CounterColumn = CounterColumn + 1

Return @current

This procedure does the read and the increment in a single statement, so it is isolated from other transactions.
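A small usage sketch, reading the new value back through the procedure's return code:

DECLARE @newValue int
EXEC @newValue = incrementCounter
SELECT @newValue AS NewCounterValue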

Ardalan Shahgholi
  • Not sure it works. Maybe it does. Being a single-statement alone is clearly not enough: http://sqlperformance.com/2014/07/t-sql-queries/isolation-levels – Eugene Ryabtsev Sep 02 '15 at 04:28
0

Maybe I'm reading too much into your example (and your real situation may be significantly more complicated), but why wouldn't you just do this in a single statement?

CREATE PROCEDURE incrementCounter AS

UPDATE
    MyTable
SET
    CounterColumn = CounterColumn + 1

GO

That way, it's automatically atomic, and if two updates are executed at the same time, they'll always be ordered by SQL Server so as to avoid the conflict you describe. If, however, your real situation is much more complicated, then wrapping it in a transaction is the best way to do this.

However, if another process has enabled a "less safe" isolation level (like one that allows dirty reads or non-repeatable reads), then I don't think a transaction will protect against this, as another process can see into the partially updated data if it has elected to allow unsafe reads.
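If the caller also needs the resulting value, the same single statement can return it; here is a sketch using the OUTPUT clause (my addition to this answer):

CREATE PROCEDURE incrementCounter AS

UPDATE
    MyTable
SET
    CounterColumn = CounterColumn + 1
OUTPUT inserted.CounterColumn AS NewCounterValue  -- use deleted.CounterColumn for the old value

GO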

SqlRyan
  • He may need the old value of counter column, in which case he could still take this approach and get it using the OUTPUT clause. – Code Magician Aug 04 '11 at 13:35
0

The short answer to your question is: yes, it can, and it will come up short. If you want to block concurrent executions of the stored procedure, start a transaction and update the same data at the start of every execution, before continuing on to do any other work within the procedure.

CREATE PROCEDURE ..
BEGIN TRANSACTION
UPDATE mylock SET ref = ref + 1
...

This will force other concurrent executions to wait their turn, since they will not be able to change the 'ref' value until the other transaction(s) complete and the associated update lock is lifted.

In general, it is a good idea to assume the result of any SELECT query is stale before it is even executed. Using "heavy" isolation levels to work around this unfortunate reality severely limits scalability. It is much better to structure changes so that they make optimistic assumptions about the state of the system you expect to exist during the update; when an assumption fails, you can try again later and hope for a better outcome. For example:

UPDATE
    MyTable
SET
    CounterColumn = @current
WHERE CounterColumn = @current - 1

Using your example with the added WHERE clause, this update does not affect any rows if the assumption about the current state fails. Check @@ROWCOUNT to test the number of rows affected, and roll back or take some other appropriate action when it differs from the expected outcome.
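A sketch of that check (the error handling here is only indicative; a real procedure might retry in a loop instead):

DECLARE @current int
SET @current = (SELECT CounterColumn FROM MyTable) + 1

UPDATE
    MyTable
SET
    CounterColumn = @current
WHERE CounterColumn = @current - 1

IF @@ROWCOUNT = 0
BEGIN
    -- another caller changed the counter first; retry the read and update, or report the conflict
    RAISERROR('CounterColumn was changed concurrently; retry the increment.', 16, 1)
END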