I'm trying to update a single column in a table for many rows, but each row needs a different date value, located by a unique WHERE condition on two other columns. I'm reading the data from a CSV and simply updating the date column in the row identified by the combination of values in the other two columns.
I've seen this
SQL update multiple rows based on multiple where conditions
but the SET value will not be static; it needs a different value for each row matched by the other two columns. In my table, the combination of those two columns is always unique.
Pseudocode:
UPDATE mytable SET date = (many different date values)
WHERE col_1 = x AND col_2 = y
col_1 and col_2 values will change for every row in the CSV, as the combination of these two values is unique. I was looking into using CASE in Postgres, but I'm not sure it can be used cleanly with conditions on multiple columns.
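For context, this is the CASE shape I was attempting (placeholder values, untested). The parts I couldn't resolve are matching on two columns at once per branch, and keeping rows that match no branch untouched:

```sql
-- Sketch of the CASE approach (placeholder values, untested).
-- Without the outer WHERE, any row matching no branch would have
-- date set to NULL, which is not what I want.
UPDATE mytable
SET date = CASE
    WHEN col_1 = 'x1' AND col_2 = 'y1' THEN DATE '2021-01-01'
    WHEN col_1 = 'x2' AND col_2 = 'y2' THEN DATE '2021-02-02'
END
WHERE (col_1, col_2) IN (('x1', 'y1'), ('x2', 'y2'));
```

With thousands of CSV rows this means duplicating every key pair in both the CASE and the WHERE, which feels wrong.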
So basically, each CSV row has a date value that must be written to the record where col_1 and col_2 equal that row's respective values. If those values don't exist in the database, the row is simply ignored.
Is there an elegant way to do this in a single query? This query is part of a Spring Batch job, so I might not be able to use native Postgres syntax. I'm struggling to even understand the shape of the query, so I can worry about exact syntax later. Would I need multiple UPDATE statements? If so, how can I achieve that in the write step of a Spring Batch job?
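If a single query isn't workable from Spring Batch, the fallback I can picture is one parameterized statement per CSV row, executed as a batch (this is just the statement shape; I don't know the writer configuration that goes with it):

```sql
-- One statement per CSV row; the three placeholders are bound from
-- the row's date, col_1 and col_2 values. A row with no matching
-- record updates zero rows, which is the desired "ignore" behaviour.
UPDATE mytable SET date = ? WHERE col_1 = ? AND col_2 = ?;
```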
EDIT: Adding some sample data to explain process
CSV rows:
date, col_1, col_2
2021-12-30, 'abc', 'def'
2021-05-30, 'abc', 'zzz'
2021-07-30, 'hfg', 'xxx'
I'll need my query to locate the record where col_1='abc' AND col_2='def', then change its date column to 2021-12-30. I'll need to do this for every CSV row, but I don't know how to format the UPDATE query.
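While searching, I came across Postgres's UPDATE ... FROM, which looks like it can join against an inline VALUES list instead of using CASE. With the three sample rows above, my best guess at the single-query form is (untested):

```sql
-- Untested sketch: each VALUES row carries its own date plus the two
-- key columns; the join replaces the per-row WHERE condition.
UPDATE mytable AS t
SET date = v.new_date::date
FROM (VALUES
    ('2021-12-30', 'abc', 'def'),
    ('2021-05-30', 'abc', 'zzz'),
    ('2021-07-30', 'hfg', 'xxx')
) AS v(new_date, col_1, col_2)
WHERE t.col_1 = v.col_1
  AND t.col_2 = v.col_2;
-- A pair like ('hfg', 'xxx') that exists in the CSV but not in the
-- table would simply match nothing and be ignored.
```

I don't know whether something like this can be issued from a Spring Batch write step, or whether the VALUES list can be built dynamically from the CSV.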