I am adding a single column to a Postgres table with 100+ columns via Django (a new migration). How can I update a column in a PostgreSQL table with the data from a pandas `data_frame`? The pseudo-code for the Postgres `UPDATE` would be:
```sql
UPDATE wide_table wt
SET wt.z = df.z
WHERE date = 'todays_date'
```
The reason for doing it this way is that I am computing the column in the `data_frame` using a CSV that lives in S3 (this becomes `df.z`). The docs for the Postgres `UPDATE` are straightforward to use, but I am unsure how to do this via Django, SQLAlchemy, pyodbc, or the like.
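To make the shape of the operation concrete, is something like the following the right direction? This is only a sketch: the connection URL is a placeholder, and in practice the `updates` list would be built from the data frame rather than hard-coded.

```python
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://user:pass@host:5432/dbname")  # placeholder URL

# One parameter dict per identifier; in reality built from the data frame
updates = [
    {"identifier": "foo", "z": 10.0},
    {"identifier": "bar", "z": 19.6},
    {"identifier": "baz", "z": 15.0},
]

stmt = text(
    "UPDATE wide_table "
    "SET z = :z "
    "WHERE identifier = :identifier AND date = CURRENT_DATE"
)

# Passing a list of parameter dicts makes SQLAlchemy run this as an executemany
with engine.begin() as conn:
    conn.execute(stmt, updates)
```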
I apologize if this is a bit convoluted. A small and incomplete example would be:
Wide table (column `z` before the update):
```
identifier | x | y | z   | date
foo        | 2 | 1 | 0.0 | ...
bar        | 2 | 8 | 0.0 | ...
baz        | 3 | 7 | 0.0 | ...
foo        | 2 | 8 | 0.0 | ...
foo        | 1 | 5 | 0.0 | ...
baz        | 2 | 8 | 0.0 | ...
bar        | 9 | 3 | 0.0 | ...
baz        | 2 | 3 | 0.0 | ...
```
Example Python snippet:

```python
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:pass@host:5432/dbname")  # placeholder

def apply_function(identifier):
    # Maps baz -> 15.0, bar -> 19.6, foo -> 10.0 for a single date
    df = pd.read_csv("s3_file_path/date_file_name.csv")
    # Compute 'z' based on the identifier and the S3 CSV (details omitted)
    return z

postgres_query = "SELECT identifier FROM wide_table"
df = pd.read_sql(sql=postgres_query, con=engine)
df['z'] = df.identifier.apply(apply_function)

# Python / SQL update logic here to update the Postgres column
???
```
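The only idea I've had for the `???` step is to stage the computed column in a scratch table with `to_sql` and then update by join, reusing `df` and `engine` from the snippet above (`tmp_z` is a name I made up, and I haven't verified this end to end):

```python
from sqlalchemy import text

# Stage the computed column in a throwaway table, then update by join.
# "tmp_z" is an invented name; to_sql creates or replaces it.
df[["identifier", "z"]].to_sql("tmp_z", engine, if_exists="replace", index=False)

with engine.begin() as conn:
    conn.execute(text(
        "UPDATE wide_table AS wt "
        "SET z = tmp.z "
        "FROM tmp_z AS tmp "
        "WHERE wt.identifier = tmp.identifier "
        "AND wt.date = CURRENT_DATE"
    ))
    conn.execute(text("DROP TABLE tmp_z"))
```

I'm also not sure whether a real temporary table (`CREATE TEMP TABLE`) would be cleaner here, since `to_sql` creates an ordinary table.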
Wide table (column `z` after the update):
```
identifier | x | y | z    | date
foo        | 2 | 1 | 10.0 | ...
bar        | 2 | 8 | 19.6 | ...
baz        | 3 | 7 | 15.0 | ...
foo        | 2 | 8 | 10.0 | ...
foo        | 1 | 5 | 10.0 | ...
baz        | 2 | 8 | 15.0 | ...
bar        | 9 | 3 | 19.6 | ...
baz        | 2 | 3 | 15.0 | ...
```
NOTE: The values in `z` will change daily, so simply creating another table to hold these `z` values is not a great solution. Also, I'd really prefer to avoid deleting all of the data and adding it back.
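Since this all lives in a Django project anyway, I'd also be happy with an ORM-level answer. I'm guessing it would look roughly like this with `bulk_update` (a sketch assuming a hypothetical model `WideTable` with fields matching the table):

```python
from datetime import date

from myapp.models import WideTable  # hypothetical model mapped to wide_table

# Hypothetical mapping computed from the S3 CSV: identifier -> z
z_by_identifier = {"foo": 10.0, "bar": 19.6, "baz": 15.0}

rows = list(WideTable.objects.filter(date=date.today()))
for row in rows:
    row.z = z_by_identifier[row.identifier]

# Write only the z column, in batches, leaving the other 100+ columns untouched
WideTable.objects.bulk_update(rows, ["z"], batch_size=1000)
```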