
I have a two-column primary key on a table. I have attempted to alter it to set IGNORE_DUP_KEY to ON with this command:

ALTER INDEX PK_mypk on MyTable
SET (IGNORE_DUP_KEY = ON);

But I get this error:

Cannot use index option ignore_dup_key to alter index 'PK_mypk' as it enforces a primary or unique constraint.

How else should I set IGNORE_DUP_KEY to on?

gotqn
Mr. Flibble

6 Answers

ALTER TABLE [TableName] REBUILD WITH (IGNORE_DUP_KEY = ON)
Aleksandr Fedorenko
Kvasi
  • It's not working with PostgreSQL 9.5; psql doesn't understand the keyword IGNORE_DUP_KEY. Is there any way to make it (or something similar) work in PostgreSQL >= 9.5? – Bhindi Aug 01 '17 at 07:47
  • Don't know. This question is tagged sql-server, so try searching for similar questions for PostgreSQL. – Kvasi Aug 07 '17 at 09:22

It's not documented in Books Online, but I've found that while IGNORE_DUP_KEY is valid for Primary Keys, you can't change it with an ALTER INDEX; you'll have to drop and re-create the primary key.
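Since ALTER INDEX is rejected for a PK, the drop-and-recreate route looks roughly like this. A sketch only: PK_mypk and MyTable come from the question, but KeyCol1 and KeyCol2 are hypothetical stand-ins for the two key columns.

```sql
-- Drop the existing primary key constraint (this also drops its index).
ALTER TABLE dbo.MyTable DROP CONSTRAINT PK_mypk;

-- Re-create it with IGNORE_DUP_KEY enabled.
ALTER TABLE dbo.MyTable ADD CONSTRAINT PK_mypk
    PRIMARY KEY CLUSTERED (KeyCol1, KeyCol2)
    WITH (IGNORE_DUP_KEY = ON);
```

Note that while the constraint is dropped, nothing enforces uniqueness, so do this in a maintenance window or inside a transaction.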

Keep in mind that IGNORE_DUP_KEY doesn't allow you to actually store duplicate rows in a unique index, it simply changes how it fails when you try it:

ON: A warning message will occur when duplicate key values are inserted into a unique index. Only the rows violating the uniqueness constraint will fail.

OFF: An error message will occur when duplicate key values are inserted into a unique index. The entire INSERT operation will be rolled back.

From http://msdn.microsoft.com/en-us/library/ms175132.aspx

BradC

Drop the PK and then recreate it

ALTER TABLE TableName DROP CONSTRAINT PK_TableName
GO
ALTER TABLE TableName ADD CONSTRAINT PK_TableName PRIMARY KEY CLUSTERED
( MyIDColumn ASC )
WITH (IGNORE_DUP_KEY = ON) ON [PRIMARY]
GO

See https://eitanblumin.com/2018/10/28/the-ignore_dup_key-option-in-primary-keys-and-unique-indexes/

Daniel C.

It only determines what happens when you insert duplicates.

See ALTER TABLE..index option

Specifies the error response when an insert operation attempts to insert duplicate key values into a unique index. The IGNORE_DUP_KEY option applies only to insert operations after the index is created or rebuilt. The option has no effect when executing CREATE INDEX, ALTER INDEX, or UPDATE.

...and it does not apply to PKs.

The BOL comment for ALTER TABLE about this and "backwards compatibility" is somewhat confusing. I just tried it and BradC is correct.

CREATE TABLE dbo.foo (bar int PRIMARY KEY WITH (FILLFACTOR=90, IGNORE_DUP_KEY = ON))
GO
INSERT dbo.foo VALUES (1)
GO
INSERT dbo.foo VALUES (1)
GO
--gives    
(1 row(s) affected)

Duplicate key was ignored.

(0 row(s) affected)
gbn

Note that this setting only affects what happens if you try to insert a duplicate key; it won't actually allow you to store a duplicate key.

If you're attempting to insert duplicate keys, you could drop the primary key index, insert your records, fix up the data (remove duplicates, etc.), then recreate the index.
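That recovery path might look like the following sketch (hypothetical names throughout; the dedupe step keeps an arbitrary row per key using the common ROW_NUMBER idiom):

```sql
-- 1. Drop the PK so duplicate keys can land in the table.
ALTER TABLE dbo.MyTable DROP CONSTRAINT PK_mypk;

-- 2. Insert / bulk-load the incoming records here.

-- 3. Remove duplicates, keeping one arbitrary row per key pair.
;WITH d AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY KeyCol1, KeyCol2
                              ORDER BY (SELECT 0)) AS rn
    FROM dbo.MyTable
)
DELETE FROM d
WHERE rn > 1;

-- 4. Put the primary key back.
ALTER TABLE dbo.MyTable ADD CONSTRAINT PK_mypk
    PRIMARY KEY CLUSTERED (KeyCol1, KeyCol2);
```

If the duplicates differ in their non-key columns, replace `ORDER BY (SELECT 0)` with an ordering that picks the row you actually want to keep.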

Marcus Adams

Personally, I never want it to ignore the duplicate. If there is a duplicate value for a primary key, it needs to be fixed; I don't want it ignored and the other records inserted, because then the user might think they all got inserted. This setting is a cover-up for a bad insert process. A well designed process doesn't need it: it cleans the data before entering it (or uses an upsert to update existing rows and insert new ones), and it either sends the bad records to a table so they can be fixed and reinserted, or sends an error back to the user so that they know their record was not inserted.

HLGEM
  • My issue is this: I need to insert a large number of records daily and I'm currently doing an insert command for each record. If the record already exists then it is ignored (by throwing a duplicate key error). From a performance POV it is better to attempt an insert and allow some to fail than it is to check if the record exists before inserting. The next step to improve performance is to use SqlBulkCopy to do the inserts instead of a single command for each record. But this isn't possible if any record exists, as the whole batch fails if IGNORE_DUP_KEY is off. – Mr. Flibble Apr 07 '10 at 22:56
  • What if one of the columns on my primary key is user defined? And what if I need to support insertions of multiple entities at the same time, say 1000? How would I validate that all entries are unique before insertion without a heavy hit to the server? I think the ignore-duplicate option is quite appropriate in that situation. Do you disagree? – julealgon Jun 26 '15 at 14:05
  • Yes, you don't have a primary key if you ignore duplicates, and if you don't have a primary key then you are going to have a world of hurt with bad data eventually. It is a better practice to put the data into a staging table and then use a select statement to insert only the new records. We insert millions of records this way. – HLGEM Jun 26 '15 at 15:15
  • Just in case someone else encounters this: At least in my case, `IGNORE_DUP_KEY` is far less performant than doing `WHERE NOT EXISTS`, so make sure to check for yourself. – CWagner Aug 26 '21 at 12:21
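The staging-table pattern HLGEM and CWagner describe can be sketched like this (all table and column names are hypothetical; it assumes intra-batch duplicates have already been handled, or are handled with a DISTINCT/dedupe step on the staging table first):

```sql
-- 1. Bulk-load the whole batch into a keyless staging table
--    (e.g. via SqlBulkCopy, as in Mr. Flibble's scenario).

-- 2. Move only the genuinely new rows into the real table.
INSERT INTO dbo.MyTable (KeyCol1, KeyCol2, Payload)
SELECT s.KeyCol1, s.KeyCol2, s.Payload
FROM dbo.MyTable_staging AS s
WHERE NOT EXISTS (
    SELECT 1
    FROM dbo.MyTable AS t
    WHERE t.KeyCol1 = s.KeyCol1
      AND t.KeyCol2 = s.KeyCol2
);

-- 3. Clear the staging table for the next batch.
TRUNCATE TABLE dbo.MyTable_staging;
```

This keeps the primary key enforced the whole time, and the existence check is done once as a set operation rather than per row.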