I've scraped some data from web sources and stored it all in a pandas DataFrame. Now, in order to harness the powerful db tools afforded by SQLAlchemy, I want to convert said DataFrame into a Table() object and eventually upsert all data into a PostgreSQL table. If this is practical, what is a workable method of accomplishing this task?
5 Answers
Update: You can save yourself some typing by using this method.
If you are using PostgreSQL 9.5 or later you can perform the UPSERT using a temporary table and an INSERT ... ON CONFLICT statement:
import pandas as pd
import sqlalchemy as sa

# …

with engine.begin() as conn:
    # step 0.0 - create test environment
    conn.exec_driver_sql("DROP TABLE IF EXISTS main_table")
    conn.exec_driver_sql(
        "CREATE TABLE main_table (id int primary key, txt varchar(50))"
    )
    conn.exec_driver_sql(
        "INSERT INTO main_table (id, txt) VALUES (1, 'row 1 old text')"
    )
    # step 0.1 - create DataFrame to UPSERT
    df = pd.DataFrame(
        [(2, "new row 2 text"), (1, "row 1 new text")], columns=["id", "txt"]
    )
    # step 1 - create temporary table and upload DataFrame
    conn.exec_driver_sql(
        "CREATE TEMPORARY TABLE temp_table AS SELECT * FROM main_table WHERE false"
    )
    df.to_sql("temp_table", conn, index=False, if_exists="append")
    # step 2 - merge temp_table into main_table
    conn.exec_driver_sql(
        """\
        INSERT INTO main_table (id, txt)
        SELECT id, txt FROM temp_table
        ON CONFLICT (id) DO
            UPDATE SET txt = EXCLUDED.txt
        """
    )
    # step 3 - confirm results
    result = conn.exec_driver_sql("SELECT * FROM main_table ORDER BY id").all()
    print(result)  # [(1, 'row 1 new text'), (2, 'new row 2 text')]
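
The same pattern generalizes into a small reusable helper. The sketch below is not from the answer: upsert_via_temp and its column handling are illustrative, it assumes every DataFrame column exists in the target table and that the table/column names are trusted, and it folds in the comment suggestion below to clean up the temp table (here via ON COMMIT DROP):

import pandas as pd
import sqlalchemy as sa

def upsert_via_temp(df: pd.DataFrame, engine: sa.engine.Engine, table: str, key: str):
    # hypothetical helper: table and column names are interpolated, not bound,
    # so they must come from trusted input
    cols = ", ".join(df.columns)
    updates = ", ".join(f"{c} = EXCLUDED.{c}" for c in df.columns if c != key)
    with engine.begin() as conn:
        # ON COMMIT DROP removes the temp table when the transaction commits
        conn.exec_driver_sql(
            f"CREATE TEMPORARY TABLE temp_table ON COMMIT DROP AS "
            f"SELECT * FROM {table} WHERE false"
        )
        df.to_sql("temp_table", conn, index=False, if_exists="append")
        conn.exec_driver_sql(
            f"INSERT INTO {table} ({cols}) SELECT {cols} FROM temp_table "
            f"ON CONFLICT ({key}) DO UPDATE SET {updates}"
        )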

- Rather than having the schema for main_table in your code twice, you can create your temporary table like this: `CREATE TEMPORARY TABLE temp_table AS SELECT * FROM main_table WHERE false` – rotten Apr 07 '22 at 21:29
- Thank you @GordThompson, this is perfect. Good job on the match_column addition (for cases where unique constraints are different from the index). I was using a delete/insert with COPY and this method gives similar performance. This is safer and shorter. Small suggestion: drop the temp table at the end, and give the temp table a unique name like @pedrovgp does below. – Courvoisier Sep 05 '22 at 09:22
- @GordThompson will this work for MySQL too? – Nicholas Hansen-Feruch Sep 22 '22 at 23:49
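
For reference, MySQL's counterpart to ON CONFLICT is INSERT ... ON DUPLICATE KEY UPDATE, so step 2 above would become something like the following (a sketch only, untested, and not from the answer; the rest of the example would also need a MySQL engine and MySQL-compatible DDL):

conn.exec_driver_sql(
    """\
    INSERT INTO main_table (id, txt)
    SELECT id, txt FROM temp_table
    ON DUPLICATE KEY UPDATE txt = VALUES(txt)
    """
)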
I have needed this so many times that I ended up creating a gist for it.
The function is below; it will create the table the first time the dataframe is persisted and will update the table if it already exists:
import pandas as pd
import sqlalchemy
import uuid

def upsert_df(df: pd.DataFrame, table_name: str, engine: sqlalchemy.engine.Engine):
    """Implements the equivalent of pd.DataFrame.to_sql(..., if_exists='update')
    (which does not exist). Creates or updates the db records based on the
    dataframe records.

    Conflicts to determine update are based on the dataframe's index.
    This will set a unique key constraint on the table equal to the index names.

    1. Create a temp table from the dataframe
    2. Insert/update from temp table into table_name

    Returns: True if successful
    """
    # If the table does not exist, we should just use to_sql to create it
    if not engine.execute(
        f"""SELECT EXISTS (
            SELECT FROM information_schema.tables
            WHERE table_schema = 'public'
            AND table_name = '{table_name}');
        """
    ).first()[0]:
        df.to_sql(table_name, engine)
        return True

    # If it already exists...
    temp_table_name = f"temp_{uuid.uuid4().hex[:6]}"
    df.to_sql(temp_table_name, engine, index=True)

    index = list(df.index.names)
    index_sql_txt = ", ".join([f'"{i}"' for i in index])
    columns = list(df.columns)
    headers = index + columns
    headers_sql_txt = ", ".join(
        [f'"{i}"' for i in headers]
    )  # index1, index2, ..., col1, col2, ...

    # col1 = EXCLUDED.col1, col2 = EXCLUDED.col2
    update_column_stmt = ", ".join([f'"{col}" = EXCLUDED."{col}"' for col in columns])

    # For the ON CONFLICT clause, postgres requires the columns to have a unique constraint
    query_pk = f"""
    ALTER TABLE "{table_name}" DROP CONSTRAINT IF EXISTS unique_constraint_for_upsert;
    ALTER TABLE "{table_name}" ADD CONSTRAINT unique_constraint_for_upsert UNIQUE ({index_sql_txt});
    """
    engine.execute(query_pk)

    # Compose and execute upsert query
    query_upsert = f"""
    INSERT INTO "{table_name}" ({headers_sql_txt})
    SELECT {headers_sql_txt} FROM "{temp_table_name}"
    ON CONFLICT ({index_sql_txt}) DO UPDATE
    SET {update_column_stmt};
    """
    engine.execute(query_upsert)
    engine.execute(f"DROP TABLE {temp_table_name}")

    return True
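
A minimal usage sketch (mine, not from the gist; it assumes SQLAlchemy 1.x, where engine.execute() is still available, a placeholder connection string, and a DataFrame with a named, unique index, as the comments below point out):

df = pd.DataFrame(
    {"id": [1, 2], "txt": ["row 1 new text", "new row 2 text"]}
).set_index("id")
engine = sqlalchemy.create_engine("postgresql://user:pass@localhost:5432/mydb")
upsert_df(df, "main_table", engine)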

- Magic, this works beautifully! Easily the best answer on SO. As mentioned in the comment, it is the perfect equivalent to `pd.DataFrame.to_sql(..., if_exists='update')`, and it even adds an index-level duplicates constraint so duplicates cannot possibly appear in the table. – Contango Oct 18 '21 at 20:40
- @NicholasHansen-Feruch, I did not test it. Since the syntax is sometimes different, it is not guaranteed to work. – pedrovgp Oct 10 '22 at 12:40
- This is great, but I just want to point out some things that weren't initially obvious to me. This approach assumes your dataframe has a named index that is unique; if your pandas df has the default index, you can create one with `df.set_index([col1, col2, ...])`. I also had an issue where I had to wrap the first sql (the one that finds the table) in `sqlalchemy.text`, but this might be version dependent. – Ken Myers Apr 24 '23 at 05:26
Here is my code for a bulk insert & insert-on-conflict-update query for PostgreSQL from a pandas dataframe:
Let's say `id` is the unique key for both the PostgreSQL table and the pandas df, and you want to insert and update based on this `id`.
import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://username:pass@host:port/dbname")

query = text(f"""
    INSERT INTO schema.table(name, title, id)
    VALUES {','.join([str(i) for i in list(df.to_records(index=False))])}
    ON CONFLICT (id)
    DO UPDATE SET name = excluded.name,
                  title = excluded.title
""")

engine.execute(query)
Make sure that your df columns are in the same order as your table columns (reorder with e.g. `df = df[["name", "title", "id"]]` if needed).
EDIT 1:
Thanks to Gord Thompson's comment, I realized that this query won't work if a column value contains a single quote. Here is a fix for that case:
import pandas as pd
from sqlalchemy import create_engine, text

df.name = df.name.str.replace("'", "''")
df.title = df.title.str.replace("'", "''")

engine = create_engine("postgresql://username:pass@host:port/dbname")

query = text("""
    INSERT INTO author(name, title, id)
    VALUES %s
    ON CONFLICT (id)
    DO UPDATE SET name = excluded.name,
                  title = excluded.title
""" % ','.join([str(i) for i in list(df.to_records(index=False))]).replace('"', "'"))

engine.execute(query)

- **SQL Injection issue:** The above code will fail if either `name` or `title` contains a single quote. Example [here](https://pastebin.com/k2bRSUw5). – Gord Thompson Dec 04 '20 at 13:54
- @GordThompson thank you for your comment. I've edited my solution above. – Ekrem Gurdal Dec 05 '20 at 11:27
- Now the code fails if either `name` or `title` contains [double quotes](https://pastebin.com/54tz2fc6). :( – Gord Thompson Dec 05 '20 at 12:27
- Is there a version of this that uses sql parameters instead of % string interpolation? NULL values in your df will break string interpolation, as is the case here. – Jesse Downing Aug 09 '21 at 20:05
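
A parameterized variant sidesteps the quoting and NULL problems raised in the comments above (a sketch, not from the answer; it reuses the author table from the edit and relies on SQLAlchemy's standard bound-parameter / executemany handling):

import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://username:pass@host:port/dbname")

query = text("""
    INSERT INTO author (name, title, id)
    VALUES (:name, :title, :id)
    ON CONFLICT (id)
    DO UPDATE SET name = excluded.name,
                  title = excluded.title
""")

# passing a list of dicts executes the statement once per row, with proper escaping
with engine.begin() as conn:
    conn.execute(query, df.to_dict("records"))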
Consider this function if your DataFrame and SQL table already contain the same column names and types. Advantages:
- Good if you have a long dataframe to insert (batching).
- Avoids writing a long sql statement in your code.
- Fast.
from sqlalchemy import Table
from sqlalchemy.engine.base import Engine as sql_engine
from sqlalchemy.dialects.postgresql import insert
from sqlalchemy.ext.automap import automap_base
import pandas as pd


def upsert_database(list_input: pd.DataFrame, engine: sql_engine, table: str, schema: str) -> None:
    if len(list_input) == 0:
        return None
    flattened_input = list_input.to_dict('records')
    with engine.connect() as conn:
        base = automap_base()
        base.prepare(engine, reflect=True, schema=schema)
        target_table = Table(table, base.metadata,
                             autoload=True, autoload_with=engine, schema=schema)
        chunks = [flattened_input[i:i + 1000] for i in range(0, len(flattened_input), 1000)]
        for chunk in chunks:
            stmt = insert(target_table).values(chunk)
            update_dict = {c.name: c for c in stmt.excluded if not c.primary_key}
            conn.execute(stmt.on_conflict_do_update(
                constraint=f'{table}_pkey',
                set_=update_dict)
            )
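
A possible invocation (my example, not the author's; it assumes a table my_table in schema public whose primary-key constraint follows the default my_table_pkey naming that the constraint= argument relies on):

from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://username:password@host:port/database")
upsert_database(df, engine, "my_table", "public")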

- I want to use this, but am a bit intimidated by all of the sqlalchemy functions that are new to me. If you ever get a chance to explain or comment this answer, I think it could be a great one for those of us who need to upsert from dataframes. – autonopy Jul 22 '21 at 18:07
If you already have a pandas dataframe, you can use df.to_sql to push the data directly through SQLAlchemy:
from sqlalchemy import create_engine

# create a connection from a Postgres URI
cnxn = create_engine("postgresql+psycopg2://username:password@host:port/database")

# write dataframe to database
df.to_sql("my_table", con=cnxn, schema="myschema")

- Indeed, that is certainly a viable option, and thank you for your input! However, I am looking to upsert data - not just insert or replace a table. That's where I think sqlalchemy could be a better option. – nate Apr 22 '20 at 14:06
- https://stackoverflow.com/questions/25955200/sqlalchemy-performing-a-bulk-upsert-if-exists-update-else-insert-in-postgr Maybe you could use this wrapper for SqlAlchemy Insert that implements upsert using the on conflict clause dynamically? – Nathan Mathews Apr 22 '20 at 15:37
- Yes, this works only if one has a Table() sqlalchemy object. In order to do this, I first need to convert the pandas df to a Table() object - which is the main and first thing I want to do. – nate Apr 22 '20 at 18:46