
I'd like to append to an existing table using the pandas df.to_sql() function.

I set if_exists='append', but my table has primary keys.

I'd like to do the equivalent of INSERT IGNORE when appending to the existing table, so I can avoid a duplicate-entry error.

Is this possible with pandas, or do I need to write an explicit query?
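
A minimal sketch of the kind of call in question (connection string and table name are placeholders):

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mysql+pymysql://user:pass@localhost/mydb")  # placeholder

df = pd.DataFrame({"id": [1, 2], "value": ["a", "b"]})

# raises a duplicate-entry error if any row's primary key already exists
df.to_sql("my_table", con=engine, if_exists="append", index=False)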

ryantuck
  • possible duplicate of [Appending Pandas dataframe to sqlite table by primary key](http://stackoverflow.com/questions/23574614/appending-pandas-dataframe-to-sqlite-table-by-primary-key) – maxymoo May 20 '15 at 00:15

13 Answers


There is unfortunately no option to specify "INSERT IGNORE". This is how I got around that limitation, inserting only the rows that were not already in the database (the dataframe name is df):

from sqlalchemy.exc import IntegrityError

# insert one row at a time, skipping any row that violates the primary key
for i in range(len(df)):
    try:
        df.iloc[i:i+1].to_sql(name="Table_Name", if_exists='append', con=Engine)
    except IntegrityError:
        pass  # or any other action
NFern
  • Don't forget to add `if_exists='append'` as a parameter – theStud54 Dec 18 '16 at 18:38
  • This solves the problem, but it slows down the query VERY much – miro May 15 '17 at 22:41
  • For those using sqlalchemy, this is what worked for me: add the import `from sqlalchemy import exc` and change the exception to `except exc.IntegrityError as e:`. Like @miro said, it does slow down the process by a lot. – Halee Oct 14 '19 at 20:25
  • What if there are columns like `created_at` and `updated_at` in the table that are auto-filled? This approach doesn't work then! – DirtyBit Dec 09 '20 at 14:33

You can do this with the method parameter of to_sql:

from sqlalchemy.dialects.mysql import insert

def insert_on_duplicate(table, conn, keys, data_iter):
    # build a MySQL INSERT ... ON DUPLICATE KEY UPDATE statement
    insert_stmt = insert(table.table).values(list(data_iter))
    # on duplicate key, overwrite the existing row with the incoming values
    on_duplicate_key_stmt = insert_stmt.on_duplicate_key_update(insert_stmt.inserted)
    conn.execute(on_duplicate_key_stmt)

df.to_sql('trades', dbConnection, if_exists='append', chunksize=4096, method=insert_on_duplicate)

For older versions of sqlalchemy, you need to pass a dict to on_duplicate_key_update, i.e. on_duplicate_key_stmt = insert_stmt.on_duplicate_key_update(dict(insert_stmt.inserted))

Jayen
  • Got an error: `raise ValueError("update parameter must be a non-empty dictionary") ValueError: update parameter must be a non-empty dictionary` – Huy Tran Oct 20 '21 at 14:04
  • @HuyTran I'm not sure why you would get that. Does the db table exist already? Do your dataframe's columns match the table's columns? – Jayen Oct 20 '21 at 22:21
  • @HuyTran what version of pandas are you using? – Jayen Nov 02 '21 at 11:29
  • Hi @Jayen, pandas=v1.2.1, sqlalchemy=1.3.22. I found the error to be in the pandas table.table and insert dialects. It seemed the ValueError referred to insert() requiring a table object instead of a string. – Huy Tran Nov 02 '21 at 14:55
  • @HuyTran if you have some different code, can you edit my answer to clarify? I recently tried this on sqlalchemy 1.3.22, but that version's `on_duplicate_key_update` doesn't accept `ColumnCollection` and I had to create a `dict`. – Jayen Nov 02 '21 at 23:06
  • @Jayen Can you please explain your answer? For example, how does `insert_stmt.inserted` behave? I intend to use your function, but want slightly different behavior. This function seems to be causing an issue like this: https://dba.stackexchange.com/questions/60295/why-does-auto-increment-jumps-by-more-than-the-number-of-rows-inserted – Grimlock Apr 04 '22 at 20:24
  • @Grimlock see the "tip" on https://docs.sqlalchemy.org/en/14/dialects/mysql.html?highlight=inserted#sqlalchemy.dialects.mysql.Insert.inserted . TBH I don't think this should affect the auto-increment, but I don't really know. – Jayen Apr 05 '22 at 01:17
  • This should really be the accepted answer at this point. It's still a relevant Google result for this problem. – Trent Yarosevich Jul 18 '22 at 00:25
  • Usually I see someone modify the top answer, and then people comment that my lower-rated, later answer is a duplicate. – Jayen Jul 18 '22 at 06:28

Please note that if_exists='append' relates only to whether the table itself exists and what to do if it does not; it says nothing about the table's content. See the doc here: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_sql.html

if_exists : {‘fail’, ‘replace’, ‘append’}, default ‘fail’
  • fail: If table exists, do nothing.
  • replace: If table exists, drop it, recreate it, and insert data.
  • append: If table exists, insert data. Create if does not exist.
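
To illustrate, a minimal sketch using an in-memory SQLite database (the table name demo is a placeholder): the second append still raises IntegrityError, because if_exists='append' only governs table creation, not duplicate handling.

import pandas as pd
from sqlalchemy import create_engine, text
from sqlalchemy.exc import IntegrityError

engine = create_engine("sqlite://")  # in-memory placeholder database

# create the table with a real primary key; pandas itself would not add one
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE demo (id INTEGER PRIMARY KEY, value TEXT)"))

df = pd.DataFrame({"id": [1, 2], "value": ["a", "b"]})
df.to_sql("demo", con=engine, if_exists="append", index=False)

# the table exists, so 'append' simply INSERTs the same rows again
try:
    df.to_sql("demo", con=engine, if_exists="append", index=False)
except IntegrityError:
    print("duplicate primary keys rejected by the database")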

user8557323

Pandas currently has no option for it, but here is the GitHub issue. If you need this feature too, upvote it.

gies0r

The for-loop method above slows things down significantly. There's a method parameter you can pass to pandas.DataFrame.to_sql to customize the SQL insertion statement:

https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_sql.html#pandas.DataFrame.to_sql

The code below should work for Postgres and does nothing if there's a conflict with the primary key "unique_code". Change the insert dialect to match your database.

def insert_do_nothing_on_conflicts(sqltable, conn, keys, data_iter):
    """
    Execute SQL statement inserting data

    Parameters
    ----------
    sqltable : pandas.io.sql.SQLTable
    conn : sqlalchemy.engine.Engine or sqlalchemy.engine.Connection
    keys : list of str
        Column names
    data_iter : Iterable that iterates the values to be inserted
    """
    from sqlalchemy.dialects.postgresql import insert
    from sqlalchemy import table, column

    # build a lightweight table object from the column names pandas passes in
    columns = []
    for c in keys:
        columns.append(column(c))

    if sqltable.schema:
        table_name = '{}.{}'.format(sqltable.schema, sqltable.name)
    else:
        table_name = sqltable.name

    mytable = table(table_name, *columns)

    insert_stmt = insert(mytable).values(list(data_iter))
    # skip any row that conflicts with the unique constraint on "unique_code"
    do_nothing_stmt = insert_stmt.on_conflict_do_nothing(index_elements=['unique_code'])

    conn.execute(do_nothing_stmt)

df.to_sql('mytable', con=sql_engine, if_exists='append', method=insert_do_nothing_on_conflicts)
Huy Tran

Pandas doesn't support editing the actual SQL syntax of the .to_sql method, so you might be out of luck. There are some experimental programmatic workarounds (say, read the DataFrame into a SQLAlchemy object with CALCHIPAN and use SQLAlchemy for the transaction), but you may be better served by writing your DataFrame to a CSV and loading it with an explicit MySQL function.

CALCHIPAN repo: https://bitbucket.org/zzzeek/calchipan/
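
A hedged sketch of that CSV route for MySQL, using the IGNORE keyword of LOAD DATA to skip duplicate-key rows (connection string, file path, and the table name trades are placeholders):

import pandas as pd
from sqlalchemy import create_engine, text

# LOCAL INFILE must be enabled on both the server and the client connection
engine = create_engine(
    "mysql+pymysql://user:pass@localhost/mydb",  # placeholder
    connect_args={"local_infile": True},
)

# write without header or index so the columns line up with the table
df.to_csv("/tmp/trades.csv", index=False, header=False)

with engine.begin() as conn:
    # IGNORE tells MySQL to skip rows that would duplicate a unique/primary key
    conn.execute(text(
        "LOAD DATA LOCAL INFILE '/tmp/trades.csv' "
        "IGNORE INTO TABLE trades "
        "FIELDS TERMINATED BY ','"
    ))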

manglano
  • https://pandas.pydata.org/pandas-docs/stable/whatsnew/v0.24.0.html#other-enhancements pandas.DataFrame.to_sql() has gained the method argument to control SQL insertion clause. See the insertion method section in the documentation. (GH8953) – Jayen Oct 03 '21 at 02:19

I had trouble where I was still getting the IntegrityError.

Strange, but I just took the above and worked it backwards:

for i, row in df.iterrows():
    # check whether this row's key already exists before inserting it
    sql = "SELECT * FROM `Table_Name` WHERE `key` = '{}'".format(row.Key)
    found = pd.read_sql(sql, con=Engine)
    if len(found) == 0:
        df.iloc[i:i+1].to_sql(name="Table_Name", if_exists='append', con=Engine)
kztd

In my case, I was trying to insert new data into an empty table, but some of the rows were duplicated, almost the same issue as here. I thought about fetching the existing data and merging it with the new data, but that is not optimal and may work only for small data, not for huge tables.

As pandas does not provide any kind of handling for this situation right now, I looked for a suitable workaround and made my own. I'm not sure whether it will work for you, but I decided to control my data first instead of relying on luck: I remove the duplicates before calling .to_sql, so if any error happens, I know more about my data and can make sure I know what is going on:

import pandas as pd


def write_to_table(table_name, data):
    df = pd.DataFrame(data)
    # Sort by price, so only the lowest price survives the duplicate removal
    df.sort_values('price', inplace=True)
    df.drop_duplicates(subset=['id_key'], keep='first', inplace=True)

    df.to_sql(table_name, engine, index=False, if_exists='append', schema='public')

So in my case, I wanted to keep the lowest price among the rows (by the way, I was passing an array of dicts as data), and for that I sorted first. The sort is not strictly necessary, but it is an example of what I mean by controlling the data that I want to keep.

I hope this helps someone who runs into almost the same situation as mine.

Al-Mothafar

When you use SQL Server, you'll get a SQL error when you insert a duplicate value into a table that has a primary key constraint. You can fix it by altering the table so the primary key ignores duplicate keys:

CREATE TABLE [dbo].[DeleteMe](
    [id] [uniqueidentifier] NOT NULL,
    [Value] [varchar](max) NULL,
    CONSTRAINT [PK_DeleteMe]
        PRIMARY KEY ([id] ASC)
        WITH (IGNORE_DUP_KEY = ON));  -- add the IGNORE_DUP_KEY option

Taken from https://dba.stackexchange.com/a/111771.

Now your df.to_sql() should work again.
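
For a table that already exists, a hedged sketch of the same idea via SQLAlchemy (connection string, table, and constraint names are assumptions): drop the current primary key and re-add it with the option turned on.

from sqlalchemy import create_engine, text

engine = create_engine("mssql+pyodbc://user:pass@mydsn")  # placeholder

with engine.begin() as conn:
    # drop the existing primary key (constraint name is an assumption)
    conn.execute(text("ALTER TABLE dbo.DeleteMe DROP CONSTRAINT PK_DeleteMe"))
    # re-add it so duplicate inserts are skipped with a warning instead of an error
    conn.execute(text(
        "ALTER TABLE dbo.DeleteMe "
        "ADD CONSTRAINT PK_DeleteMe PRIMARY KEY (id) "
        "WITH (IGNORE_DUP_KEY = ON)"
    ))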

Rens

The solutions by Jayen and Huy Tran helped me a lot, but they didn't work straight out of the box. The problem I faced with Jayen's code is that it requires the DataFrame columns to be exactly those of the database. This was not true in my case, as there were some DataFrame columns that I won't write to the database.
I modified the solution so that it considers the column names.

from sqlalchemy.dialects.mysql import insert

def insertWithConflicts(sqltable, conn, keys, data_iter):
    """
    Execute SQL statement inserting data, whilst taking care of conflicts
    Used to handle duplicate key errors during database population
    This is my modification of the code snippet
    from https://stackoverflow.com/questions/30337394/pandas-to-sql-fails-on-duplicate-primary-key

    The help page from https://docs.sqlalchemy.org/en/14/core/dml.html#sqlalchemy.sql.expression.Insert.values
    proved useful.

    Parameters
    ----------
    sqltable : pandas.io.sql.SQLTable
    conn : sqlalchemy.engine.Engine or sqlalchemy.engine.Connection
    keys : list of str
        Column names
    data_iter : Iterable that iterates the values to be inserted. It is a zip object.
                The length of it is equal to the chunksize passed in df.to_sql()
    """
    # map each row's values to the column names pandas passes in, so that
    # .values() binds by name rather than by position
    vals = [dict(zip(keys, row)) for row in data_iter]
    insertStmt = insert(sqltable.table).values(vals)
    doNothingStmt = insertStmt.on_duplicate_key_update(dict(insertStmt.inserted))
    conn.execute(doNothingStmt)
hnagaty

I faced the same issue and adopted the solution provided by @Huy Tran for a while, until my tables started to have schemas. I had to improve his answer a bit, and this is the final result:

from sqlalchemy import table, column
from sqlalchemy.dialects.postgresql import insert

def do_nothing_on_conflicts(sql_table, conn, keys, data_iter):
    """
    Execute SQL statement inserting data

    Parameters
    ----------
    sql_table : pandas.io.sql.SQLTable
    conn : sqlalchemy.engine.Engine or sqlalchemy.engine.Connection
    keys : list of str
        Column names
    data_iter : Iterable that iterates the values to be inserted
    """
    columns = []
    for c in keys:
        columns.append(column(c))

    # pass the schema separately instead of concatenating it into the name
    if sql_table.schema:
        my_table = table(sql_table.name, *columns, schema=sql_table.schema)
    else:
        my_table = table(sql_table.name, *columns)

    insert_stmt = insert(my_table).values(list(data_iter))
    do_nothing_stmt = insert_stmt.on_conflict_do_nothing()

    conn.execute(do_nothing_stmt)

How to use it:

history.to_sql('history', schema=schema, con=engine, method=do_nothing_on_conflicts)

The idea is the same as @NFern's, but it uses a recursive function that halves the df in each iteration to skip the row or rows causing the integrity violation.

from sqlalchemy.exc import IntegrityError

def insert(df):
    try:
        # inserting into backup table
        df.to_sql("table", con=engine, if_exists='append', index=False, schema='schema')
    except IntegrityError:
        rows = df.shape[0]
        if rows > 1:
            # split the dataframe in half and retry each half separately
            df1 = df.iloc[:int(rows/2), :]
            df2 = df.iloc[int(rows/2):, :]
            insert(df1)
            insert(df2)
        else:
            print(f"{df} not inserted. Integrity violation, duplicate primary key/s")

Same as @Jayen's, but for PostgreSQL with do-nothing-on-conflict logic (see the SQLAlchemy doc):

from sqlalchemy.dialects.postgresql import insert

def insert_or_do_nothing_on_conflict(table, conn, keys, data_iter):
    insert_stmt = insert(table.table).values(list(data_iter))
    # you need to specify the column name(s) used to infer the unique index
    on_duplicate_key_stmt = insert_stmt.on_conflict_do_nothing(index_elements=['column_index1', 'column_index2'])
    conn.execute(on_duplicate_key_stmt)


df.to_sql(
    name="table_name",
    schema="schema_name",
    con=engine,
    if_exists="append",
    index=False,
    method=insert_or_do_nothing_on_conflict
)
user3793803