I have a pandas DataFrame with multiple columns that I want to store in a PostgreSQL database using .to_sql():
my_table.to_sql('table', con=engine, schema='wrhouse', if_exists='append', index=False)
I have set a primary key (date) in order to avoid duplicate entries, and the above command works as long as the primary key value does not yet exist in the database.
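For context, the engine and table are set up roughly like this (the connection string and the value column are placeholders; only date, the schema wrhouse, and the table name appear in my actual setup):

from sqlalchemy import create_engine

# placeholder connection string; real credentials differ
engine = create_engine('postgresql+psycopg2://user:password@localhost:5432/mydb')

# target table, with date as the primary key (other columns omitted/example only):
# CREATE TABLE wrhouse."table" (
#     date  timestamp PRIMARY KEY,
#     value double precision
# );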
However, if that key already exists, I get the following error:
IntegrityError: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "table_pkey"
DETAIL: Key (date)=(2022-07-01 00:00:00) already exists.
Now, what I would like to do (sketched in SQL below) is:
- Update the row if the key (date) already exists
- Insert a new row if the key (date) does not exist
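In plain SQL, the behaviour I am after would look something like the following sketch (the value column is just an example, and the date literal is taken from the error message):

from sqlalchemy import text

# per-row upsert I would like to_sql() to perform
upsert = text("""
    INSERT INTO wrhouse."table" (date, value)
    VALUES (:date, :value)
    ON CONFLICT (date) DO UPDATE
    SET value = EXCLUDED.value
""")

with engine.begin() as conn:
    conn.execute(upsert, {"date": "2022-07-01 00:00:00", "value": 1.0})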
I checked the documentation at https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_sql.html but couldn't find such an option for the DataFrame.to_sql() function.
Additionally, if I change the if_exists='append' parameter to if_exists='replace', the whole table is dropped and recreated, which is not what I want.
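For completeness, this is the variant I tried:

# 'replace' first drops the existing table, then recreates it from the DataFrame,
# so all previously stored rows are lost
my_table.to_sql('table', con=engine, schema='wrhouse', if_exists='replace', index=False)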
Is there any way to update/insert rows using the .to_sql() function?