
I autogenerate INSERT INTO statements into .sql files.

When I load this file into PostgreSQL several times, I get duplicate entries. In my case this is rather annoying.

I suppose this is expected behaviour in general, but in my case I want only unique entries, so the duplicates confuse me and I would prefer to avoid them completely, even before insertion time.

Is there a way to tell PostgreSQL, either in the database itself or through the SQL statement, not to insert the data if either

(a) the exact same row already exists in that place, or (b) some entry, such as id 555, is already populated (and thus to reject any new attempt to insert id 555)?
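For case (b), a minimal sketch of what a unique constraint would do, assuming a hypothetical table `entries` with an integer `id` column (the table and column names are illustrative, not from the question):

```sql
-- A PRIMARY KEY (or UNIQUE constraint) makes PostgreSQL reject any
-- second insert that reuses the same id with a duplicate-key error.
CREATE TABLE entries (
    id   integer PRIMARY KEY,
    data text
);

INSERT INTO entries (id, data) VALUES (555, 'first attempt');
-- A repeated insert with id 555 now fails instead of creating a duplicate:
INSERT INTO entries (id, data) VALUES (555, 'second attempt');
-- ERROR: duplicate key value violates unique constraint
```

Note that this makes the second insert fail loudly; the whole `.sql` file load would abort at that statement unless the duplicates are filtered out first.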

shevy
    Create a unique index to prevent duplicates. –  Feb 24 '14 at 15:38
  • Is there any reason why you don't use the ID as a primary key? Perhaps showing your table structure would help. – itsols Feb 24 '14 at 15:42
  • Note that a unique index will produce an error if you attempt to insert, rather than ignoring the row. AFAIK, there is no way to ignore the row automatically in Postgres, so if you'd rather treat it as a silent success and carry on, you would need to add a filter such as a `WHERE NOT EXISTS` clause to your `INSERT` – IMSoP Feb 24 '14 at 16:54
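The `WHERE NOT EXISTS` filter mentioned in the last comment could look like the following sketch, again assuming a hypothetical `entries(id, data)` table; the row is silently skipped when the id is already present, so the same file can be loaded repeatedly without errors:

```sql
-- INSERT ... SELECT with a guard: inserts nothing if id 555 already exists,
-- and succeeds (with 0 rows affected) instead of raising an error.
INSERT INTO entries (id, data)
SELECT 555, 'some value'
WHERE NOT EXISTS (
    SELECT 1 FROM entries WHERE id = 555
);
```

The generated `.sql` file would need each `INSERT` emitted in this form rather than as a plain `INSERT INTO ... VALUES`.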

0 Answers