I am using PostgreSQL, where on a daily basis I have to UPDATE the values of selected columns. My source is a pandas df. Since there are more than 1000 rows in the df, I am using the `df.to_sql()` method via SQLAlchemy:

import sqlalchemy

engine = sqlalchemy.create_engine(...)
df.to_sql('mytable', con=engine, if_exists='replace')

But there is a problem. Every time I use the above snippet, it actually DROPs the existing table in the DB and then CREATEs a new table with the column names from the df, which means the column names get changed! Besides, the total number of columns in the df is less than the number of columns in `mytable` in the DB.

I have gone through this SO thread, but that is not the kind of solution I am looking for.

Is there any other way to tackle this situation? Or is there a way to UPSERT row-wise values into selected columns of the database table (e.g. with `for index, row in df.iterrows():`)? A rough sketch of what I mean is below.
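To make the question concrete, here is a minimal sketch of the kind of row-wise UPSERT I have in mind, using PostgreSQL's `INSERT ... ON CONFLICT`. The names `id`, `col_a`, and `col_b` are placeholders; it assumes `mytable` has a unique or primary-key constraint on `id`:

import sqlalchemy
from sqlalchemy import text

engine = sqlalchemy.create_engine(...)

# Placeholder names: "id" is assumed to be the primary key of mytable,
# and "col_a"/"col_b" are the df columns whose values should be updated.
stmt = text("""
    INSERT INTO mytable (id, col_a, col_b)
    VALUES (:id, :col_a, :col_b)
    ON CONFLICT (id) DO UPDATE
    SET col_a = EXCLUDED.col_a,
        col_b = EXCLUDED.col_b
""")

# engine.begin() opens a connection and commits the transaction on exit.
with engine.begin() as conn:
    for index, row in df.iterrows():
        conn.execute(stmt, {'id': row['id'],
                            'col_a': row['col_a'],
                            'col_b': row['col_b']})

This only touches the listed columns, so the other columns of `mytable` would stay intact. Is something along these lines the right approach, or is there a better way?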

pythondumb
  • You are passing `if_exists='replace'` to the `to_sql()` method; that's why it creates a new table for you. What behavior do you want to achieve? – Louis Ng Sep 29 '20 at 03:33
  • @Louis Ng: I know that `if_exists='replace'` exhibits the behaviour you are referring to. I want the table in the DB to remain intact; only the selected fields in that table should get updated with values from the dataframe. Column names should not change. – pythondumb Sep 29 '20 at 03:39

0 Answers