I have a transaction that contains multiple SQL statements (INSERT, UPDATE, and/or DELETE). While it executes, I want to ignore any statement that fails with a duplicate-key error and continue on to the next statement. What's the best way to do that?
12 Answers
I think you are looking for the IGNORE_DUP_KEY option on your index. Have a look at the IGNORE_DUP_KEY ON option documented at http://msdn.microsoft.com/en-us/library/ms186869.aspx, which causes duplicate insertion attempts to produce a warning instead of an error.
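A minimal sketch of the behavior, with illustrative table and index names (only the `IGNORE_DUP_KEY` option itself comes from the answer above):

CREATE TABLE dbo.Widgets (WidgetCode varchar(20) NOT NULL);

CREATE UNIQUE INDEX UX_Widgets_WidgetCode
    ON dbo.Widgets (WidgetCode)
    WITH (IGNORE_DUP_KEY = ON);

INSERT INTO dbo.Widgets (WidgetCode) VALUES ('A1'); -- inserted
INSERT INTO dbo.Widgets (WidgetCode) VALUES ('A1'); -- skipped with a "Duplicate key was ignored." warning, no error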

- @Jonathan Leffler: `IGNORE_DUP_KEY` won't cause duplicate data. It causes SQL Server to ignore the duplicate key: not inserting it into the database. – Ian Boyd Feb 03 '10 at 15:59
- @Ian Boyd: OK - living and learning is part of SO culture. Thanks for the information. (I'm not sure whether I could write reliable code if an INSERT succeeded without inserting what I asked it to insert and without telling me; presumably, there is a warning passed back, at least. However, I see what the feature does.) – Jonathan Leffler Feb 03 '10 at 16:21
- @Jonathan Leffler: It's not your fault for misunderstanding the feature. It is horribly documented: "*If you create a unique index, you can set this option to ensure each value in an indexed column is unique.*" For years I thought the option was the difference between an error being raised right when you insert, or delayed until you call COMMIT. It turns out to be neither of those. It would be better named "Ignore, skip, and do not insert, duplicate rows". – Ian Boyd Feb 03 '10 at 18:14
- @Ian Boyd & Canoehead: thanks, indeed it's not well documented, and many don't even know it exists, let alone how it works :) – Stefan Rogin May 07 '12 at 15:42
- I have a many-to-many relation between products and categories: a table with just product_id and category_id and a unique index. With IGNORE_DUP_KEY I can just add products (a, b, c) to categories (x, y, z) without worrying about whether some of the products are already in one of the categories. I just care about the final result. – Leif Neland Apr 22 '15 at 10:23
- @LeifNeland - I agree. Too many people are quick to dismiss this as a bad idea *in all cases*. There are legitimate cases where it's not at all unwise to just skip duplicate inserts - cases that would require more than the character limit of a comment to explain clearly... – AaronHS Jun 15 '16 at 23:52
- `IGNORE_DUP_KEY` is now better documented: https://learn.microsoft.com/en-us/sql/t-sql/statements/alter-table-index-option-transact-sql?redirectedfrom=MSDN&view=sql-server-ver15#:~:text=A%20warning%20message%20occurs%20when%20duplicate%20key%20values%20are%20inserted%20into%20a%20unique%20index.%20Only%20the%20rows%20violating%20the%20uniqueness%20constraint%20fail. – Qua285 Aug 09 '21 at 06:50
- @AaronHS Very true. IGNORE_DUP_KEY is the exact statement I was looking for. My case is moving user preferences to a new database, and I don't care if some preferences have already been set in the new DB. I just want to insert the ones that haven't been created. – Freeman Helmuth Feb 24 '22 at 13:19
Expanding on your comment to SquareCog's reply, you could do:
INSERT INTO X SELECT @Y, @Z WHERE @Y NOT IN (SELECT Y FROM X)
INSERT INTO X2 SELECT @Y2, @Z2 WHERE @Y2 NOT IN (SELECT Y FROM X2)
INSERT INTO X3 SELECT @Y3, @Z3 WHERE @Y3 NOT IN (SELECT Y FROM X3)
Here, I assume that column Y is present in all three tables, and that the @-prefixed names stand in for the values being inserted. Note that performance will be poor if the tables are not indexed on Y.

Oh yeah, Y has a unique constraint on it, so the tables are indexed, and this should perform optimally.

- I like this - our deployment scripts are designed to be run many times safely, so INSERTs always have criteria to avoid collisions. – n8wrl Jul 16 '09 at 18:06
- I tried this approach, but my MSSQL Server (2008 R2) complains about "Incorrect syntax near the keyword 'WHERE'" - it doesn't seem to like WHERE following INSERT ... VALUES. Are you sure your solution is correct? – Dai Jun 02 '12 at 19:16
- Yes, I am. Post your code, preferably as a new question, and someone will help you figure out the problem. – Philip Kelley Jun 04 '12 at 14:09
- Is it not possible for you to still get duplicate key errors because you haven’t specified [`WITH (HOLDLOCK, UPDLOCK)`](http://stackoverflow.com/a/3791506/429091) ([example](http://stackoverflow.com/a/3407890/429091))? – binki Sep 24 '13 at 19:09
- I'd have to test, but I am reasonably confident that some form of lock is placed on the rows being scanned as part of the "first" insert statement that would (temporarily) interfere with the reads or changes of a second modification statement. – Philip Kelley Sep 24 '13 at 19:53
- @Dai I had the same issue when using SQL Azure; I am not sure if this is a limitation of SQL Azure or not. This is what I ended up using instead: INSERT INTO test (col1, col2) SELECT DISTINCT 1, 'a' FROM test WHERE NOT EXISTS (SELECT col1 FROM test WHERE col1 = 1) – Cotega Mar 21 '14 at 22:20
- I do not understand this: is "Y" a value, as in `VALUES(Y,Z)`, or is "Y" a column name, as in `SELECT Y FROM X`? – dumbledad Jun 28 '16 at 10:35
- Depends on context. In the values clause, Y would be a specific value; elsewhere, it's a column name. (I based this on the referenced text.) – Philip Kelley Jun 28 '16 at 14:58
- "I am reasonably confident that some form of lock is placed on the rows being scanned as part of the 'first' insert statement that would (temporarily) interfere with the reads or changes of a second modification statement." For the record, this isn't the case. The read will use the ambient isolation level - unless it's SERIALIZABLE, nothing prevents some other connection from inserting the rows anew after the read. This also applies to MERGE statements, by the way. – Mark Sowul Mar 14 '19 at 16:23
Although my emphatic advice to you is to structure your SQL so as to not attempt duplicate inserts (Philip Kelley's snippet is probably what you need), I want to mention that an error on a statement doesn't necessarily cause a rollback.

Unless `XACT_ABORT` is `ON`, a transaction will not automatically roll back when an error is encountered unless the error is severe enough to kill the connection. `XACT_ABORT` defaults to `OFF`.
For example, the following SQL successfully inserts three values into the table:

create table x ( y int not null primary key )

begin transaction
insert into x(y) values(1)
insert into x(y) values(2)
insert into x(y) values(2) -- raises error 2627, but the transaction survives
insert into x(y) values(3)
commit
Unless you're setting `XACT_ABORT` to `ON`, it is the error being raised on the client that causes the rollback. If for some horrible reason you can't avoid inserting duplicates, you ought to be able to trap the error on the client and ignore it.

If by "Ignore Duplicate Error statments", to abort the current statement and continue to the next statement without aborting the trnsaction then just put BEGIN TRY.. END TRY around each statement:
BEGIN TRY
    INSERT ...
END TRY
BEGIN CATCH
    -- required, but you don't have to do anything here
END CATCH
...

- Unfortunately, we're using SQL Server 2000 right now, so I don't think the TRY ... CATCH statements are available in that version. – Kingamoon Jul 16 '09 at 17:45
I'd like to chime in with the following:
If 99% of your data is going to insert without error, doing a SELECT beforehand causes a huge performance drop (in my case, from 200 rows/sec to 20 rows/sec) compared to "dumb" inserts and catching the occasional error.

After ignoring the "Violation of PRIMARY KEY constraint" errors, things went back to being bottlenecked by other resources (headroom being defined as "what the bottlenecking resources don't have").

Which is the whole reason I landed on this discussion in the first place.
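A sketch of that "dumb insert" pattern - it assumes SQL Server 2005+ for TRY/CATCH and 2012+ for THROW, and the table and variable names are hypothetical:

BEGIN TRY
    INSERT INTO dbo.Target (ID, Payload) VALUES (@ID, @Payload);
END TRY
BEGIN CATCH
    -- 2627 = unique constraint violation; 2601 = duplicate key in a unique index
    IF ERROR_NUMBER() NOT IN (2627, 2601)
        THROW;  -- re-raise anything that isn't a duplicate
END CATCH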

OK. After trying out some error handling, I figured out how to solve the issue I was having.
Here's an example of how to make this work (let me know if there's something I'm missing):
SET XACT_ABORT OFF; -- really important to set that to OFF

BEGIN
DECLARE @Any_error int

BEGIN TRANSACTION

INSERT INTO Table1(Value1) VALUES('Value1')
SELECT @Any_error = @@ERROR
IF @Any_error <> 0 AND @Any_error <> 2627 GOTO ErrorHandler

INSERT INTO Table1(Value1) VALUES('Value1') -- duplicate: raises 2627, which is deliberately ignored
SELECT @Any_error = @@ERROR
IF @Any_error <> 0 AND @Any_error <> 2627 GOTO ErrorHandler

INSERT INTO Table1(Value1) VALUES('Value2')
SELECT @Any_error = @@ERROR

ErrorHandler:
IF @Any_error = 0 OR @Any_error = 2627
BEGIN
    COMMIT TRAN
END
ELSE
BEGIN
    ROLLBACK TRAN
END
END
As a result of the above transaction, Table1 will contain the values 'Value1' and 'Value2'. By the way, 2627 is the error number for a duplicate key violation.
Thank you all for the prompt reply and helpful suggestions.

Use IGNORE_DUP_KEY = ON in the primary key definition to skip duplicates during insert.

For example:

create table X( col1 .....
CONSTRAINT [pk_X] PRIMARY KEY CLUSTERED
(
    col1 ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = ON, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 70) ON [PRIMARY]
) ON [PRIMARY]
I came here because I was trying to do the same thing; I knew I had dupes in the source data but only wanted to update the target data and not add the dupes.
I think a MERGE works great here because you can UPDATE or DELETE things that are different and INSERT things that are missing.
I ended up doing this and it worked great. I use SSIS to loop through Excel files and load them into a "raw" SQL table, dupes and all. Then I run a MERGE to merge the raw table with the production table. Then I TRUNCATE the raw table and move on to the next Excel file.
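A minimal sketch of that MERGE step - the table and column names (`RawImport`, `Production`, `ID`, `Name`) are hypothetical stand-ins, since the answer doesn't show its schema:

MERGE Production AS tgt
USING (SELECT DISTINCT ID, Name FROM RawImport) AS src -- DISTINCT collapses dupes within the raw data
    ON tgt.ID = src.ID
WHEN MATCHED AND tgt.Name <> src.Name THEN
    UPDATE SET Name = src.Name -- update rows that differ
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ID, Name) VALUES (src.ID, src.Name); -- insert rows that are missing

TRUNCATE TABLE RawImport; -- ready for the next Excel file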

- If you're loading into this table a lot, why not just clean up the dupes by loading the current table into a new table with a unique index with IGNORE_DUP_KEY = ON, deleting your original table, and renaming the new table? Then you could load directly into the dupe-free table. Seems like it would save a great deal of time in the future for minimal effort. – Mark Jul 14 '16 at 19:04
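A sketch of the dedupe-and-swap that comment describes - `dbo.Prod`, `dbo.Prod_Clean`, and the `ID` key column are hypothetical names:

SELECT * INTO dbo.Prod_Clean FROM dbo.Prod WHERE 1 = 0; -- copy the structure, no rows

CREATE UNIQUE CLUSTERED INDEX UX_Prod_Clean_ID
    ON dbo.Prod_Clean (ID)
    WITH (IGNORE_DUP_KEY = ON);

INSERT INTO dbo.Prod_Clean SELECT * FROM dbo.Prod; -- dupes are silently skipped

DROP TABLE dbo.Prod;
EXEC sp_rename 'dbo.Prod_Clean', 'Prod';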
For SQL Server 2000:

INSERT INTO t1 (ID, NAME)
SELECT @valueid, @valuename
WHERE NOT EXISTS
    (SELECT 0
     FROM t1 AS t2
     WHERE t2.ID = @valueid AND t2.name = @valuename)

INSERT INTO KeyedTable(KeyField, Otherfield)
SELECT n.*
FROM
    (SELECT 'PossibleDupeLiteral' AS KeyField, 'OtherfieldValue' AS Otherfield
     UNION ALL
     SELECT 'PossibleDupeLiteral', 'OtherfieldValue2'
    ) n
LEFT JOIN KeyedTable k
    ON k.KeyField = n.KeyField
WHERE k.KeyField IS NULL
This tells the server to look for the same data (hopefully in the same speedy way it checks for duplicate keys) and insert nothing if it finds it.

I like the IGNORE_DUP_KEY solution too, but then anyone who relies on errors to catch problems will be mystified when the server silently ignores their dupe-key errors.

The reason I chose this over Philip Kelley's solution is that you can provide several rows of data and have only the missing ones actually get in, as the snippet above shows.

Well, you could solve this with a table variable:
DECLARE @RoleToAdds TABLE
([RoleID] int, [PageID] int)
INSERT INTO @RoleToAdds ([RoleID], [PageID])
VALUES
(1, 2),
(1, 3),
(1, 4),
(2, 5)
INSERT INTO [dbo].[RolePages] ([RoleID], [PageID])
SELECT rta.[RoleID], rta.[PageID] FROM @RoleToAdds rta WHERE NOT EXISTS
(SELECT * FROM [RolePages] rp WHERE rp.PageID = rta.PageID AND rp.RoleID = rta.RoleID)
This might not work for large amounts of data, but for a few rows it should work!

Keys must be unique. Don't do that. Redesign as needed.

(If you are trying to insert, then delete, and the insert fails... just do the delete first. Roll back on error in either statement.)

- No, I don't think I made my question clear. INSERT INTO X VALUES(Y,Z) INSERT INTO X2 VALUES(Y2,Z2) INSERT INTO X3 VALUES(Y3,Z3) Let's say the second statement causes a duplicate error. I want to ignore it and continue executing the 3rd statement. The behavior in a transaction is that it throws an error and exits (rolls back). – Kingamoon Jul 16 '09 at 17:38