653

I'm using Python to write to a postgres database:

sql_string = "INSERT INTO hundred (name,name_slug,status) VALUES ("
sql_string += hundred + ", '" + hundred_slug + "', " + status + ");"
cursor.execute(sql_string)

But because some of my rows are identical, I get the following error:

psycopg2.IntegrityError: duplicate key value  
  violates unique constraint "hundred_pkey"

How can I write an 'INSERT unless this row already exists' SQL statement?

I've seen complex statements like this recommended:

IF EXISTS (SELECT * FROM invoices WHERE invoiceid = '12345')
UPDATE invoices SET billed = 'TRUE' WHERE invoiceid = '12345'
ELSE
INSERT INTO invoices (invoiceid, billed) VALUES ('12345', 'TRUE')
END IF

But firstly, is this overkill for what I need, and secondly, how can I execute one of those as a simple string?

AP257

22 Answers

1074

Postgres 9.5 (released 2016-01-07) offers an "upsert" command, also known as the ON CONFLICT clause of INSERT:

INSERT ... ON CONFLICT DO NOTHING/UPDATE

It solves many of the subtle concurrency problems you can run into with the approaches that some of the other answers propose.
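To make the two forms concrete, here is a runnable sketch. It uses Python's built-in sqlite3 purely so the example is self-contained: SQLite 3.24+ happens to share this ON CONFLICT syntax with Postgres 9.5+. Against Postgres you would send the same SQL through psycopg2's `cursor.execute()`, with `%s` placeholders instead of `?`. Table and column names are taken from the question.

```python
import sqlite3

# In-memory stand-in for the question's "hundred" table.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE hundred (name TEXT PRIMARY KEY, name_slug TEXT, status INTEGER)")

# (1) INSERT if not exists, else do nothing.
sql = ("INSERT INTO hundred (name, name_slug, status) VALUES (?, ?, ?) "
       "ON CONFLICT (name) DO NOTHING")
cur.execute(sql, ("Chichester", "chichester", 1))
cur.execute(sql, ("Chichester", "chichester", 1))  # duplicate: silently skipped

# (2) INSERT if not exists, else update the existing row
#     (EXCLUDED refers to the row that failed to insert).
sql = ("INSERT INTO hundred (name, name_slug, status) VALUES (?, ?, ?) "
       "ON CONFLICT (name) DO UPDATE SET status = excluded.status")
cur.execute(sql, ("Chichester", "chichester", 2))

print(cur.execute("SELECT name, status FROM hundred").fetchall())
# → [('Chichester', 2)]
```

Note that the parameterized `cursor.execute(sql, params)` style also avoids the string-concatenation problems (quoting, SQL injection) in the question's original code.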

Arie
  • How to do something like this (Update on conflict) on Postgres 9.3? – Tushar Jain Dec 24 '16 at 02:59
  • The [Postgres 9.3 manual](https://www.postgresql.org/docs/9.3/static/plpgsql-control-structures.html#PLPGSQL-UPSERT-EXAMPLE) describes how you can create a function using exceptions to do this. Several other implementations can be found on the web. It is apparently very tricky though to get it right with transactions and other corner cases... – Arie Dec 25 '16 at 12:31
  • @TusharJain prior to PostgreSQL 9.5 you can do an "old fashioned" UPSERT (with CTE) but you may experience problems with race conditions and it will not be as performant as the 9.5 style. There is good detail about upsert on this [blog](https://hashrocket.com/blog/posts/upsert-records-with-postgresql-9-5) (in the updated area at the bottom) including some links if you want to read more about the details. – Skyguard Apr 01 '17 at 16:03
  • For those who need it, here are two simple examples. (1) INSERT if not exists, else NOTHING - `INSERT INTO distributors (did, dname) VALUES (7, 'Redline GmbH') ON CONFLICT (did) DO NOTHING;` (2) INSERT if not exists, else UPDATE - `INSERT INTO distributors (did, dname) VALUES (5, 'Gizmo Transglobal'), (6, 'Associated Computing, Inc') ON CONFLICT (did) DO UPDATE SET dname = EXCLUDED.dname;` These examples are from the manual - https://www.postgresql.org/docs/9.5/static/sql-insert.html – AnnieFromTaiwan Apr 22 '17 at 09:10
  • There is one caveat/side effect. In a table with a sequence column (serial or bigserial), even if no row is inserted the sequence is incremented at every insert attempt. – Grzegorz Luczywo Jul 02 '17 at 12:10
  • It'd be better to link to the INSERT documentation instead of pointing to the release. Doc link: https://www.postgresql.org/docs/9.5/static/sql-insert.html – borjagvo Nov 03 '17 at 07:36
  • Note that you cannot do [ON CONFLICT DO SELECT](https://commitfest.postgresql.org/16/1241/). – Gajus Oct 22 '18 at 15:44
  • If you need to `ON CONFLICT DO NOTHING RETURNING id`, then read this answer https://stackoverflow.com/a/42217872/368691. – Gajus Oct 22 '18 at 15:57
  • Now is there a SQL Alchemy method to upsert with on-conflict do nothing, perhaps following advice from https://stackoverflow.com/a/42217872/368691 – MrMesees Dec 06 '18 at 23:51
  • @AnnieFromTaiwan it would have helped me that you update the question with your examples! didn't want to get the credits for you! – Pipo Mar 10 '19 at 12:45
  • @GrzegorzLuczywo is there any way around the serial being incremented? – matt Aug 22 '20 at 08:50
  • One caveat with this: ON CONFLICT won't work well with partitioned tables unless your partitioning key is part of the uniqueness you want to enforce (at least in v11). – sqldoug Dec 18 '20 at 22:18
  • @AnnieFromTaiwan Your comment really should be the accepted answer! – Maarten Derickx Jun 15 '21 at 07:36
  • This only works when the check is against the columns with unique constraints. It's not a general solution. – kaqqao Aug 11 '23 at 11:04
473

How can I write an 'INSERT unless this row already exists' SQL statement?

There is a nice way of doing conditional INSERT in PostgreSQL:

INSERT INTO example_table
    (id, name)
SELECT 1, 'John'
WHERE
    NOT EXISTS (
        SELECT id FROM example_table WHERE id = 1
    );

CAVEAT This approach is not 100% reliable for concurrent write operations, though. There is a very tiny race condition between the SELECT in the NOT EXISTS anti-semi-join and the INSERT itself. It can fail under such conditions.
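From Python, this pattern combines naturally with parameter passing (which also avoids the quoting problems in the question's string concatenation). The sketch below runs against in-memory SQLite only so that it is self-contained; the SQL itself is plain enough to run unchanged on Postgres via psycopg2 with `%s` placeholders. Keep the caveat above in mind: under concurrent writers you still want a unique constraint as a backstop.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE example_table (id INTEGER PRIMARY KEY, name TEXT)")

# Conditional INSERT: the SELECT produces the candidate row only if
# no row with that id exists yet.
sql = """
    INSERT INTO example_table (id, name)
    SELECT ?, ?
    WHERE NOT EXISTS (SELECT 1 FROM example_table WHERE id = ?)
"""
cur.execute(sql, (1, "John", 1))
cur.execute(sql, (1, "John", 1))  # second attempt inserts nothing

print(cur.execute("SELECT * FROM example_table").fetchall())
# → [(1, 'John')]
```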

Basil Bourque
John Doe
  • How safe is this assuming that the "name"-field has a UNIQUE constraint? Will it ever fail with unique-violation? – agnsaft Dec 29 '12 at 19:57
  • @invictus If some other new id would match the name, that is already in the database, I think it should fail, this would be an obvious behavior. – John Doe Jan 03 '13 at 17:22
  • This works fine. The only problem is the coupling, I guess: what if one modifies the table such that more columns are unique? In that case all the scripts must be modified. It would be nice if there were a more generic way to do this... – Willem Van Onsem Aug 01 '14 at 13:17
  • Is there a way to do this while specifying the column names? – sudo Aug 12 '15 at 02:06
  • @JohnDoe Since this query is not 100% guaranteed safe for concurrent write, how does it compare to the solution if I write two queries: One `SELECT` to find if row exists, one `INSERT` to insert if not exist in one transaction? – tonga Sep 01 '15 at 15:00
  • Most of the times I believe we use serial ids, so id would not be on the insert clause. How would it modify the query? – Rodrigo Dec 23 '15 at 21:22
  • Is it possible to use it with `RETURNS id`, for example to get the `id` whether it has been inserted or not? – Olivier Pons Jan 03 '16 at 10:58
  • @OlivierPons yes, it's possible. Add `RETURNING id` at the end of the query and it will return either a new row id or nothing, if no row has been inserted. – pumbo Feb 15 '16 at 10:08
  • This is just a kind reminder: If someone wants to avoid race conditions, she/he better not use this answer. Please refer to the second-voted answer (the one by Arie). – AnnieFromTaiwan Apr 22 '17 at 09:14
  • I have found this to be unreliable. It appears Postgres sometimes executes the insert before it has executed the select and I end up with a duplicate key violation even though the record hasn't been inserted yet. Try to use version >= 9.5 with ON CONFLICT. – Michael Silver May 28 '17 at 02:38
  • I wanted to insert more than 1 row and used your code like this INSERT INTO example_table (id, name) SELECT 1, 'John' WHERE NOT EXISTS ( SELECT id FROM example_table WHERE id = 1 ) UNION SELECT 2, 'Smith' WHERE NOT EXISTS ( SELECT id FROM example_table WHERE id = 2 ) on Postgres 9.2. Do the race condition will be important if I try to do that for hundred of entries? – Ratah Aug 30 '18 at 08:17
  • I suppose you could put this into an advisory lock with the key (or one of the keys) being the id but that's still just an advisory lock. I'm trying the ON CONFLICT solution. – soger Nov 25 '21 at 17:47
62

One approach would be to create a non-constrained (no unique indexes) table, insert all your data into it, and do a SELECT DISTINCT from that to drive your insert into the hundred table.

At a high level, it would go as follows. I assume all three columns are distinct in my example, so for step 4 change the NOT EXISTS join to join only on the unique columns in the hundred table.

  1. Create temporary table. See docs here.

     CREATE TEMPORARY TABLE temp_data (name TEXT, name_slug TEXT, status INTEGER);
    
  2. INSERT Data into temp table.

     INSERT INTO temp_data (name, name_slug, status) VALUES (%s, %s, %s);
    
  3. Add any indexes to the temp table.

  4. Do main table insert.

     INSERT INTO hundred(name, name_slug, status) 
         SELECT DISTINCT name, name_slug, status
         FROM temp_data
         WHERE NOT EXISTS (
             SELECT 'X' 
             FROM hundred
             WHERE 
                 hundred.name          = temp_data.name
                 AND hundred.name_slug = temp_data.name_slug
                 AND hundred.status    = temp_data.status
         );
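Putting the four steps together, here is a minimal end-to-end sketch. The staging-table name and column types are assumptions, and it is demonstrated with Python's built-in sqlite3 so it runs standalone, but `CREATE TEMPORARY TABLE` and the anti-join work the same way on Postgres through psycopg2.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE hundred (name TEXT, name_slug TEXT, status INTEGER)")
cur.execute("INSERT INTO hundred VALUES ('a', 'a-slug', 1)")  # pre-existing row

# Steps 1-2: stage all incoming rows, duplicates and all, in a temp table.
cur.execute("CREATE TEMP TABLE temp_data (name TEXT, name_slug TEXT, status INTEGER)")
rows = [("a", "a-slug", 1), ("b", "b-slug", 2), ("b", "b-slug", 2)]
cur.executemany("INSERT INTO temp_data VALUES (?, ?, ?)", rows)

# Step 4: de-duplicate and anti-join into the main table.
cur.execute("""
    INSERT INTO hundred (name, name_slug, status)
    SELECT DISTINCT name, name_slug, status
    FROM temp_data
    WHERE NOT EXISTS (
        SELECT 'X' FROM hundred
        WHERE hundred.name = temp_data.name
          AND hundred.name_slug = temp_data.name_slug
          AND hundred.status = temp_data.status
    )
""")
print(cur.execute("SELECT * FROM hundred ORDER BY name").fetchall())
# → [('a', 'a-slug', 1), ('b', 'b-slug', 2)]
```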
    
Michael Easter
Kuberchaun
  • This is the fastest way I have found to do mass inserts when I do not know if the row already exists. – nate c Nov 18 '10 at 03:51
  • select 'X'? Can someone clarify? This is simply a select statement, right: `SELECT name,name_slug,status` or `*` – roberthuttinger Oct 08 '14 at 14:33
  • Look up correlated subquery. 'X' could be changed to a 1 or even 'SadClown'. SQL requires there to be something and 'X' is a common thing to use. It's small and it makes it obvious a correlated subquery is being used and meets the requirements of what SQL requires. – Kuberchaun Oct 09 '14 at 16:08
  • You mentioned "insert all your data into (assuming temp table) and do a select distinct from that". In that case, should it not be `SELECT DISTINCT name, name_slug, status FROM temp_data`? – gibbz00 Jun 20 '19 at 21:32
  • Would this solution be reliable for concurrent write operations? Won't we be expecting race conditions between the `INSERT` and the `SELECT` in the subquery? – RianLauw Dec 08 '21 at 11:49
  • I agree with @gibbz00 : I think `hundred` and `temp_data` are backwards? Don't we want to select distinct rows from the temp table which do not exist in the original table? – Michael Easter Jun 03 '23 at 19:23
19

Unfortunately, PostgreSQL supports neither MERGE nor ON DUPLICATE KEY UPDATE (this has since changed: ON CONFLICT arrived in 9.5 and MERGE in 15), so on older versions you'll have to do it in two statements:

UPDATE  invoices
SET     billed = 'TRUE'
WHERE   invoiceid = '12345'

INSERT
INTO    invoices (invoiceid, billed)
SELECT  '12345', 'TRUE'
WHERE   '12345' NOT IN
        (
        SELECT  invoiceid
        FROM    invoices
        )

You can wrap it into a function:

CREATE OR REPLACE FUNCTION fn_upd_invoices(id VARCHAR(32), billed VARCHAR(32))
RETURNS VOID
AS
$$
        UPDATE  invoices
        SET     billed = $2
        WHERE   invoiceid = $1;

        INSERT
        INTO    invoices (invoiceid, billed)
        SELECT  $1, $2
        WHERE   $1 NOT IN
                (
                SELECT  invoiceid
                FROM    invoices
                );
$$
LANGUAGE sql;

and just call it:

SELECT  fn_upd_invoices('12345', 'TRUE')
Quassnoi
  • Actually, this doesn't work: I can call `INSERT INTO hundred (name, name_slug, status) SELECT 'Chichester', 'chichester', NULL WHERE 'Chichester' NOT IN (SELECT NAME FROM hundred);` any number of times, and it keeps inserting the row. – AP257 Mar 09 '11 at 13:23
  • @AP257: `CREATE TABLE hundred (name TEXT, name_slug TEXT, status INT); INSERT INTO hundred (name, name_slug, status) SELECT 'Chichester', 'chichester', NULL WHERE 'Chichester' NOT IN (SELECT NAME FROM hundred); INSERT INTO hundred (name, name_slug, status) SELECT 'Chichester', 'chichester', NULL WHERE 'Chichester' NOT IN (SELECT NAME FROM hundred); SELECT * FROM hundred`. There is one record. – Quassnoi Mar 09 '11 at 16:30
  • Postgres *does* support `ON DUPLICATE KEY UPDATE`. It's called `ON CONFLICT (column) DO UPDATE SET` – kolypto Sep 20 '20 at 00:45
  • @kolypto: sure, since 2016. Feel free to edit the answer. – Quassnoi Sep 20 '20 at 00:52
19

This is exactly the problem I faced, and my version is 9.5.

I solved it with the SQL query below.

INSERT INTO example_table (id, name)
SELECT 1 AS id, 'John' AS name FROM example_table
WHERE NOT EXISTS(
            SELECT id FROM example_table WHERE id = 1
    )
LIMIT 1;

Hope that will help someone who has the same issue with version >= 9.5.

Thanks for reading.

tuanngocptn
  • This answer is the same as @John Doe's, where the caveat of concurrent write operations has been pointed out. – RianLauw Dec 08 '21 at 11:37
  • @RianLauw: yeah right, That is the same about 80% ~ 90%. But you can use both way to find the better answer for you. – tuanngocptn Dec 09 '21 at 04:37
15

You can make use of VALUES - available in Postgres:

INSERT INTO person (name)
    SELECT name FROM person
    UNION 
    VALUES ('Bob')
    EXCEPT
    SELECT name FROM person;
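A quick way to convince yourself this statement is idempotent, sketched with in-memory SQLite (which also accepts VALUES inside a compound SELECT; the same statement runs on Postgres):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE person (name TEXT)")
cur.execute("INSERT INTO person VALUES ('Alice')")

# (existing names UNION 'Bob') EXCEPT existing names
# = only the names not already present.
sql = """
    INSERT INTO person (name)
    SELECT name FROM person
    UNION
    VALUES ('Bob')
    EXCEPT
    SELECT name FROM person
"""
cur.execute(sql)
cur.execute(sql)  # running it again adds nothing: 'Bob' is now excluded too

print(cur.execute("SELECT name FROM person ORDER BY name").fetchall())
# → [('Alice',), ('Bob',)]
```

As the comments note, the two full-table SELECTs make this unattractive for large tables.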
  • SELECT name FROM Person <--- what if there's a billion rows in person? – Henley Apr 24 '13 at 00:34
  • I think this is a nice quick way to solve the issue, but only when you're sure the source table will never grow huge. I've got a table which will never have more than 1000 rows, so I can use this solution. – Leonard Feb 09 '15 at 08:34
  • WOW, this is exactly what I needed. I was worried I'd need to create a function or a temp table, but this precludes all that--thank you! – Alkanshel Feb 03 '16 at 22:24
  • @HenleyChiu has quite a good point. Since those selects are against the existing table, maybe we can add a where clause to each, ensuring we only select 'Bob''s row? – Ivaylo Slavov Sep 26 '20 at 16:43
10

I know this question is from a while ago, but thought this might help someone. I think the easiest way to do this is via a trigger. E.g.:

Create Function ignore_dups() Returns Trigger
As $$
Begin
    If Exists (
        Select
            *
        From
            hundred h
        Where
            -- Assuming all three fields are primary key
            h.name = NEW.name
            And h.name_slug = NEW.name_slug
            And h.status = NEW.status
    ) Then
        Return NULL;
    End If;
    Return NEW;
End;
$$ Language plpgsql;

Create Trigger ignore_dups
    Before Insert On hundred
    For Each Row
    Execute Procedure ignore_dups();

Execute this code from a psql prompt (or however you like to execute queries directly on the database). Then you can insert as normal from Python. E.g.:

sql = "Insert Into hundred (name, name_slug, status) Values (%s, %s, %s)"
cursor.execute(sql, (hundred, hundred_slug, status))

Note that as @Thomas_Wouters already mentioned, the code above takes advantage of parameters rather than concatenating the string.

ktr
  • If anyone else was wondering too, from the [docs](http://www.postgresql.org/docs/9.2/static/plpgsql-trigger.html): "Row-level triggers fired BEFORE can return null to signal the trigger manager to skip the rest of the operation for this row (i.e., subsequent triggers are not fired, and the INSERT/UPDATE/DELETE does not occur for this row). If a nonnull value is returned then the operation proceeds with that row value." – Pete Feb 24 '13 at 04:45
  • Exactly this answer I was looking for. Clean code, using function + trigger instead of select statement. +1 – Jacek Krawczyk Feb 03 '16 at 10:03
  • I love this answer, use function and trigger. Now I find another way to break the deadlock using functions and triggers... – Sukma Saputra May 16 '19 at 04:22
8

There is a nice way of doing a conditional INSERT in PostgreSQL using a WITH query:

WITH a AS (
    SELECT id
    FROM schema.table_name
    WHERE column_name = your_identical_column_value
)
INSERT INTO schema.table_name
    (col_name1, col_name2)
SELECT
    your_value1, your_value2
WHERE NOT EXISTS (
    SELECT id FROM a
)
RETURNING id;
Zoey Hewll
Ritesh Jha
  • This is essentially the same as the answer (here https://stackoverflow.com/a/13342031/5294065) by @John Doe, where the subquery is just a CTE, right? – RianLauw Dec 08 '21 at 11:50
7

We can simplify the query using upsert:

insert into invoices (invoiceid, billed) 
  values ('12345', 'TRUE') 
  on conflict (invoiceid) do 
    update set billed=EXCLUDED.billed;
Brian Burns
4

INSERT ... WHERE NOT EXISTS is a good approach, and race conditions can be avoided with a transaction "envelope":

BEGIN;
LOCK TABLE hundred IN SHARE ROW EXCLUSIVE MODE;
INSERT ... ;
COMMIT;
Pavel Francírek
3

It's easy with rules:

CREATE RULE file_insert_defer AS ON INSERT TO file
WHERE (EXISTS ( SELECT * FROM file WHERE file.id = new.id)) DO INSTEAD NOTHING;

But it fails with concurrent writes ...

1

The approach with the most upvotes (from John Doe) does somehow work for me, but in my case I got only 180 of the expected 422 rows. I couldn't find anything wrong and there were no errors at all, so I looked for a different simple approach.

Using IF NOT FOUND THEN after a SELECT works perfectly for me.

(described in PostgreSQL Documentation)

Example from documentation:

SELECT * INTO myrec FROM emp WHERE empname = myname;
IF NOT FOUND THEN
  RAISE EXCEPTION 'employee % not found', myname;
END IF;
Christoph Lösch
1

psycopg2's cursor class has the attribute rowcount.

This read-only attribute specifies the number of rows that the last execute*() produced (for DQL statements like SELECT) or affected (for DML statements like UPDATE or INSERT).

So you could try UPDATE first and INSERT only if rowcount is 0.

But depending on activity levels in your database you may hit a race condition between UPDATE and INSERT where another process may create that record in the interim.
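As a sketch of that flow (using the stdlib sqlite3 module so it runs standalone; psycopg2's cursor exposes the same `rowcount` attribute), the update-first pattern looks like this. As noted above, the window between the UPDATE and the INSERT is a race under concurrency, so a unique constraint should still back it up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE invoices (invoiceid TEXT PRIMARY KEY, billed TEXT)")

def upsert_invoice(invoiceid, billed):
    # Try the UPDATE first; rowcount tells us how many rows it touched.
    cur.execute("UPDATE invoices SET billed = ? WHERE invoiceid = ?",
                (billed, invoiceid))
    if cur.rowcount == 0:  # nothing updated: the row does not exist yet
        cur.execute("INSERT INTO invoices (invoiceid, billed) VALUES (?, ?)",
                    (invoiceid, billed))

upsert_invoice("12345", "TRUE")   # inserts
upsert_invoice("12345", "FALSE")  # updates
print(cur.execute("SELECT * FROM invoices").fetchall())
# → [('12345', 'FALSE')]
```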

bcorso
johnbaum
1

Your column "hundred" seems to be defined as primary key and therefore must be unique, which is not the case. The problem isn't with the SQL; it is with your data.

I suggest you add an id column of serial type to handle the primary key.

Boodoo
1

If many of your rows are identical, you will end up checking many times. Instead, you can send them all and let the database decide which to insert, using the ON CONFLICT clause as follows:

  sql_string = ("INSERT INTO hundred (name, name_slug, status) VALUES ("
                + hundred + ", '" + hundred_slug + "', " + status
                + ") ON CONFLICT ON CONSTRAINT hundred_pkey DO NOTHING;")
  cursor.execute(sql_string)
opena
0

In Postgres version 9.5 or higher you can use ON CONFLICT to avoid constraint errors, as @Arie mentioned above. For more options related to this INSERT query, refer to the Postgres Docs. An alternative solution is to use try/except to handle the runtime error.

0

You can also use MERGE, introduced in Postgres 15.

Nevertheless, it is not as straightforward as INSERT ... ON CONFLICT DO NOTHING.

i000174
0

You could use a library that manages materialisations like this.

Load your data as auto-normalised tables; the dlt library infers the schema, types the data, and supports schema evolution too: https://pypi.org/project/dlt/

You can even use this library to do an upsert on your structured data afterwards. Here's an example where we use an id in the JSON to update the SQL table generated at the destination:

data = [{'id': 1, 'name': 'John'}]

# open connection
pipe = dlt.pipeline(destination='duckdb',
                    dataset_name='raw_data')

# Upsert/merge: Update old records, insert new
# Capture the outcome in load info
load_info = pipe.run(data,
                      write_disposition="merge",
                      primary_key="id",
                      table_name="users")
AdrianBR
-1
INSERT INTO invoices (invoiceid, billed) (
    SELECT '12345','TRUE' WHERE NOT EXISTS (
        SELECT 1 FROM invoices WHERE invoiceid='12345' AND billed='TRUE'
        )
)
buddemat
-2

I was looking for a similar solution, trying to find SQL that would work in PostgreSQL as well as HSQLDB. (HSQLDB was what made this difficult.) Using your example as a basis, this is the format that I found elsewhere.

sql = "INSERT INTO hundred (name,name_slug,status)"
sql += " ( SELECT " + hundred + ", '" + hundred_slug + "', " + status
sql += " FROM hundred"
sql += " WHERE name = " + hundred + " AND name_slug = '" + hundred_slug + "' AND status = " + status
sql += " HAVING COUNT(*) = 0 );"
Jeff Fairley
-2

Here is a generic python function that given a tablename, columns and values, generates the upsert equivalent for postgresql.

import json

def upsert(table_name, id_column, other_columns, values_hash):

    template = """
    WITH new_values ($$ALL_COLUMNS$$) as (
      values
         ($$VALUES_LIST$$)
    ),
    upsert as
    (
        update $$TABLE_NAME$$ m
            set
                $$SET_MAPPINGS$$
        FROM new_values nv
        WHERE m.$$ID_COLUMN$$ = nv.$$ID_COLUMN$$
        RETURNING m.*
    )
    INSERT INTO $$TABLE_NAME$$ ($$ALL_COLUMNS$$)
    SELECT $$ALL_COLUMNS$$
    FROM new_values
    WHERE NOT EXISTS (SELECT 1
                      FROM upsert up
                      WHERE up.$$ID_COLUMN$$ = new_values.$$ID_COLUMN$$)
    """

    all_columns = [id_column] + other_columns
    all_columns_csv = ",".join(all_columns)
    all_values_csv = ','.join([query_value(values_hash[column_name]) for column_name in all_columns])
    set_mappings = ",".join([ c+ " = nv." +c for c in other_columns])

    q = template
    q = q.replace("$$TABLE_NAME$$", table_name)
    q = q.replace("$$ID_COLUMN$$", id_column)
    q = q.replace("$$ALL_COLUMNS$$", all_columns_csv)
    q = q.replace("$$VALUES_LIST$$", all_values_csv)
    q = q.replace("$$SET_MAPPINGS$$", set_mappings)

    return q


def query_value(value):
    if value is None:
        return "NULL"
    if isinstance(value, str):  # Python 3: all strings are str
        return "'%s'" % value.replace("'", "''")
    if type(value) == dict:
        return "'%s'" % json.dumps(value).replace("'", "''")
    if type(value) == bool:
        return "%s" % value
    if type(value) == int:
        return "%s" % value
    return value


if __name__ == "__main__":

    my_table_name = 'mytable'
    my_id_column = 'id'
    my_other_columns = ['field1', 'field2']
    my_values_hash = {
        'id': 123,
        'field1': "john",
        'field2': "doe"
    }
    print(upsert(my_table_name, my_id_column, my_other_columns, my_values_hash))
Patrick
-18

The solution is simple, but not immediate.
If you want to use this instruction, you must make one change to the db:

ALTER USER user SET search_path to 'name_of_schema';

After this change, "INSERT" will work correctly.

el fuser