
I have various reasons for needing to implement, in addition to the identity column PK, a second, concurrency-safe, auto-incrementing column in a SQL Server 2005 database. Being able to have more than one identity column would be ideal, but I'm looking at using a trigger to simulate this as close to the metal as possible.

I believe I have to use a serializable isolation level transaction in the trigger. Do I go about this like I would use such a transaction in a normal SQL query?
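To make it concrete, this is roughly the shape I have in mind (a rough sketch only; the names are invented and the numbering logic is exactly what I'm asking about):

-- rough sketch only, not working code
CREATE TRIGGER TR_JobCard_Insert ON dbo.JobCard
AFTER INSERT
AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    -- work out and assign the new row's JobCard number here,
    -- reusing the lowest free number first
END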

It is a non-negotiable requirement that the business meaning of the second incrementing column remain separate from the behind-the-scenes meaning of the first (PK) incrementing column.

To put things as simply as I can, if I create JobCards '0001', '0002', and '0003', then delete JobCards '0002' and '0003', the next JobCard I create must have ID '0002', not '0004'.

ProfK

5 Answers


Just an idea: if you have 2 "identity" columns, then surely they would be 'in sync' - if not exactly the same value, then differing by a constant. If so, why not add the "second identity" as a COMPUTED column which offsets the primary identity? Or is my logic flawed here?

Edit: As per Martin's comment, note that your calculation might need to be N * id + C, where N is the increment and C the offset / delta - excuse my rusty maths.

For example:

ALTER TABLE MyTable ADD OtherIdentity AS Id * 2 + 1;

Edit: Note that for SQL Server 2012 and later, you can now use an independent SEQUENCE to create two or more independently incrementing columns in the same table.
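A minimal sketch of the SEQUENCE approach (table and object names are assumed; note that, like IDENTITY, a SEQUENCE does not reclaim gaps):

CREATE SEQUENCE dbo.JobCardNoSeq AS int START WITH 1 INCREMENT BY 1;

CREATE TABLE dbo.JobCard (
    Id int IDENTITY(1,1) PRIMARY KEY,
    -- second, independently incrementing number drawn from the sequence
    JobCardNo int NOT NULL DEFAULT (NEXT VALUE FOR dbo.JobCardNoSeq),
    Name varchar(100) NOT NULL
);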

Note: the OP has edited the original requirement to include reclaiming sequences (noting that identity columns in SQL Server do not reclaim used IDs once rows are deleted).

StuartLC
  • Damn - you type faster than I do. +1 for an excellent suggestion. – ktharsis Aug 27 '10 at 12:22
  • Maybe they have different increments? I'm sure the OP must have his reasons! – Martin Smith Aug 27 '10 at 12:23
  • But it would still be possible to calculate one from the other. I admit I'm curious about the reason as well. – Martin Smith Aug 27 '10 at 12:27
  • Can this method have concurrency issues? Thanks in advance for your answer. – Leandro Bardelli Mar 31 '16 at 16:00
  • @LeandroTupone Hi - without `PERSISTED`, the computed column is evaluated at read time, so the value is tied to the underlying source column (`Id` in the example above). So as long as `Id` is stable (e.g. an `INT IDENTITY`), there shouldn't be any issues with concurrency. But definitely, in 2012 and later, go for another `SEQUENCE` if the second value is integral. – StuartLC Mar 31 '16 at 17:10
  • Perfect answer @StuartLC, thanks a lot, it's a shame I can't vote it twice! :) – Leandro Bardelli Mar 31 '16 at 17:23
  • Sorry again @StuartLC - can this be done with bigint? I tried it with SQL Fiddle using this seed: 3147483648 and it works fine, but I wonder if there can be any problem. – Leandro Bardelli Mar 31 '16 at 18:33
  • It is fine to use [`BIGINT` as an `IDENTITY` column](http://stackoverflow.com/a/19845343/314291). Just note SQL `BIGINT` is 64-bit - it isn't the same as an arbitrary-length integer in .Net or [Java](http://stackoverflow.com/questions/35883725/mapping-between-bigint-in-sql-with-biginteger-in-java). – StuartLC Mar 31 '16 at 19:37

I would disallow deletes from this table altogether. Instead of deleting, I would mark rows as inactive (i.e. available for reuse). Instead of inserting, I would first check whether any inactive rows exist and reuse the one with the smallest ID. I would insert a new row only if there are no inactive rows already in the table.

Of course, I would serialize all inserts and deletes with sp_getapplock.
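A minimal sketch of that insert path, with the table, column and procedure names assumed:

-- Reuse the lowest inactive row if one exists, otherwise insert a new one;
-- all callers are serialized behind a named application lock.
CREATE PROC dbo.AddJobCard
    @Name varchar(100)
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRAN;

    -- serialize every insert/reactivation on one exclusive app lock
    EXEC sp_getapplock @Resource = 'JobCardNumbering',
                       @LockMode = 'Exclusive',
                       @LockOwner = 'Transaction';

    DECLARE @ReuseId int;
    SELECT @ReuseId = MIN(JobCardId) FROM dbo.JobCard WHERE IsActive = 0;

    IF @ReuseId IS NOT NULL
        UPDATE dbo.JobCard SET IsActive = 1, Name = @Name WHERE JobCardId = @ReuseId;
    ELSE
        INSERT dbo.JobCard (Name, IsActive) VALUES (@Name, 1);

    COMMIT TRAN;  -- committing releases the transaction-owned app lock
END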

You can use a trigger to disallow all deletes, it is simpler than filling gaps.
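A minimal sketch of such a trigger (table name assumed):

-- reject and roll back any delete against the table
CREATE TRIGGER TR_JobCard_NoDelete ON dbo.JobCard
FOR DELETE
AS
BEGIN
    RAISERROR('Deletes are not allowed on JobCard; mark the row inactive instead.', 16, 1);
    ROLLBACK TRAN;
END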

A-K
  • I am doing that, but my requirement is "if records are deleted", maybe not by the user, who can effectively only mark them Inactive, but maybe by a support person or dev. – ProfK Aug 27 '10 at 17:59
  • You can use a trigger to disallow all deletes, it is simpler than filling gaps. – A-K Aug 27 '10 at 18:46
1

A solution to this issue from "Inside Microsoft SQL Server 2008: T-SQL Querying" is to create another table with a single row that holds the current max value.

CREATE TABLE dbo.Sequence(
 val int
 )

-- seed the single row that the procedure below will increment
INSERT INTO dbo.Sequence(val) VALUES(0);

Then, to allocate a range of sufficient size for your insert:

CREATE PROC dbo.GetSequence
@val AS int OUTPUT,
@n as int = 1
AS
-- increment the counter and capture its new value in one atomic statement
UPDATE dbo.Sequence
SET @val = val = val + @n;

-- return the first value of the newly allocated range
SET @val = @val - @n + 1;

This will block other concurrent attempts to increment the sequence until the first transaction commits.
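A minimal usage sketch of the procedure:

DECLARE @val int;

-- reserve a block of 3 values
EXEC dbo.GetSequence @val = @val OUTPUT, @n = 3;

-- @val now holds the first of the 3 reserved values;
-- use @val, @val + 1 and @val + 2 for the rows being inserted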

For a non-blocking solution that doesn't handle multi-row inserts, see my answer here.

Martin Smith

If you never delete from the table, you could create a view with a column calculated using ROW_NUMBER().
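A minimal sketch of that idea (table and column names assumed):

CREATE VIEW dbo.JobCardNumbered
AS
SELECT JobCardId,
       -- dense business number derived at query time
       ROW_NUMBER() OVER (ORDER BY JobCardId) AS JobCardNo,
       Name
FROM dbo.JobCard;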

Also, a SQL Server identity can get out of sync with a user-generated one, depending on the use of rollbacks (identity values consumed by a rolled-back insert are not reused).

KM.

This is probably a terrible idea, but it works in at least a limited-use scenario.

Just use a regular identity and reseed on deletes.

create table reseedtest (
   a int identity(1,1) not null,
   name varchar(100)
)

insert reseedtest values('erik'),('john'),('selina') -- note: multi-row VALUES needs SQL Server 2008+; on 2005 use one INSERT per row
select * from reseedtest

go
CREATE TRIGGER TR_reseedtest_D ON reseedtest FOR DELETE
AS
BEGIN TRAN
DECLARE @a int
SET @a = (SELECT TOP 1 a FROM reseedtest WITH (TABLOCKX, HOLDLOCK))
--anyone know another way to lock a table besides doing something to it?
DBCC CHECKIDENT(reseedtest, reseed, 0) -- drop the seed to 0 first...
DBCC CHECKIDENT(reseedtest, reseed)    -- ...then let SQL Server recompute it from the current max
COMMIT TRAN
GO

delete reseedtest where a >= 2
insert reseedtest values('katarina'),('david')
select * from reseedtest

drop table reseedtest

This won't work if you are deleting from the "middle of the stack" as it were, but it works fine for deletes from the incrementing end.

Reseeding once to 0 then again is just a trick to avoid having to calculate the correct reseed value.

ErikE
  • It cannot work. If you delete just one row in the middle, reseed your identity, and add two rows, you will get duplicates. – A-K Aug 28 '10 at 16:49
  • @AlexKuznetsov - that's what I said, please read carefully. This may not be the best answer possible, but you have to admit it is an interesting technique. It's minimalist and solves the problem. Also, the OP has not spoken whether he needs to handle gaps in the middle or not. His given scenario only shows deleting rows at the end. – ErikE Aug 28 '10 at 17:20
  • @nonnb Somewhere a lock has to occur in order for two sessions to not calculate the same value. A table lock is unfortunate, but what happens if someone inserts between the two DBCC statements? I honestly don't know how to get the current identity value into a variable and even if I did, does DBCC CHECKIDENT accept a variable as the third parameter? – ErikE Aug 28 '10 at 17:22
  • @AlexKuznetsov I just reread what you said, and realized it's incorrect. If you delete a row in the middle and reseed the identity to the end, you won't get duplicate rows, you'll just have a hole. Look carefully at my reseed code: it auto-adjusts to start after the last-used identity value. – ErikE Aug 28 '10 at 17:34
  • @Emtucifor: agreed, it will not work because there will be a hole, not because there will be a duplicate. Did you test how your approach works under high concurrency? – A-K Aug 28 '10 at 18:38
  • @AlexKuznetsov You're not making sense. The OP didn't talk about deleting from the middle but about deleting from the end. There will be no duplicates. I didn't test under high concurrency, but I can say with confidence that it will work fine, though be blocked while the reseeding is going on. Reseeding works fine even if there are missing IDs in the middle. Could you please be more specific about what the failure will be that you think will happen? – ErikE Aug 29 '10 at 18:34
  • @AlexKuznetsov - I think I just got an idea of what you're talking about. You mean deletes at the same time? That could indeed blow things up. I don't have time now, but I'll test it later. Perhaps it would have to be an INSTEAD OF DELETE trigger to lock the table before the delete. Not sure. – ErikE Aug 29 '10 at 19:07
  • Please read carefully through the other answers to work out which approach you need before taking this one. For 2012 I recommend using SEQUENCE as @StuartLC's answer specifies (it helped in my case). I'm not saying this is wrong - just a piece of advice. – Leandro Bardelli Mar 31 '16 at 19:35