I have two tables. One of them is a temporary table into which I copy the data from a big CSV file. After that, I update my other table from the temporary table (see this answer: Copy a few of the columns of a csv file into a table).
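For context, the current workflow looks roughly like this (table, column and file names below are just placeholders):

    -- Temporary table that receives the raw CSV data.
    CREATE TEMP TABLE tmp_data (id integer, txt text);
    COPY tmp_data (id, txt) FROM '/path/to/data.csv' WITH (FORMAT csv);

    -- Update the other table from the temporary table.
    UPDATE my_table AS t
    SET    txt = tmp.txt
    FROM   tmp_data AS tmp
    WHERE  t.id = tmp.id;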
When I update my temporary table again with an (updated) CSV file (the data comes from a grep in bash, and the number of rows grows with every update), I want to delete the rows that are not affected by the update. That way the temp table would be smaller than a temp table holding all the data.
First: Is it better to drop all data in the temp table, fill it with the whole updated CSV data, and then update/insert into the other table? Second: Or is it better to update the temp table in place?
So it is a matter of table size. I am talking about roughly 500k rows (with geometry columns).
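The first option would basically be this (placeholder names again):

    -- First option: throw away the old temp data and reload the complete CSV.
    TRUNCATE tmp_data;
    COPY tmp_data (id, txt) FROM '/path/to/updated.csv' WITH (FORMAT csv);
    -- ...and then update/insert the other table from tmp_data as before.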
An example:
other table (to be updated)
1, NULL
2, NULL
temp table (current state)
1, hello
2, good morning
updated CSV
1, hello there
2, good morning
3, good evening
temp table after a full reload (first option)
1, hello there
2, good morning
3, good evening
OR
temp table with only the new and changed rows (second option)
1, hello there
3, good evening
So my question is: how do I update a table from a CSV file, i.e. insert the new rows, update the existing rows, and delete the rows that were not affected by the update?
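To make it concrete, I imagine something along these lines, but I am not sure it is the right (or an efficient) approach for roughly 500k rows. The names are placeholders, and the last statement assumes PostgreSQL 9.5+ and a unique constraint on id:

    -- Load the updated CSV into a fresh staging table.
    CREATE TEMP TABLE tmp_new (id integer, txt text);
    COPY tmp_new (id, txt) FROM '/path/to/updated.csv' WITH (FORMAT csv);

    -- Drop the rows that are unchanged compared to the target table,
    -- so the temp table only keeps new and changed rows (second example above).
    DELETE FROM tmp_new AS n
    USING  my_table AS t
    WHERE  t.id = n.id
    AND    t.txt IS NOT DISTINCT FROM n.txt;

    -- Insert the new rows and update the changed ones
    -- (requires PostgreSQL 9.5+ and a unique constraint on id).
    INSERT INTO my_table (id, txt)
    SELECT id, txt FROM tmp_new
    ON CONFLICT (id) DO UPDATE
        SET txt = EXCLUDED.txt;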