
I am trying to populate a sparse table from a set of 10 input TSV files.

Each file contains (primary_key, value) pairs for one of the columns of the destination table.

However, each file has a different set of primary keys for which it has values.

I wish to load these into the table using LOAD DATA INFILE.

The first file loads fine, but the second one runs into problems when a duplicate primary key is found.

If I use REPLACE in the query, the value already loaded into the first column is lost, but if I use IGNORE, the new value isn't loaded into the second column.

Is there any way to do this other than hairy text manipulation to combine all the input files into a single file?

aselder
  • One strategy comes to mind: load one file at a time into a temp table, then use an INSERT ... SELECT ... ON DUPLICATE KEY UPDATE query to copy the data from that table into your sparse table http://stackoverflow.com/questions/2472229/insert-into-select-from-on-duplicate-key-update – Girish Rao Feb 08 '12 at 04:28

1 Answer


That is because REPLACE deletes the existing row first and then inserts the new one, so any column values loaded by an earlier file are wiped out.

I agree with Girish Rao about using a temp table. Here is an example of this solution: http://forums.mysql.com/read.php?10,362103,362423#msg-362423
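A minimal sketch of that approach (table names, column names, and the file path are hypothetical, since the question doesn't give them; it assumes the destination table is keyed on primary_key and each TSV file carries one column, here col_b):

```sql
-- Hypothetical destination: a sparse table keyed on primary_key.
CREATE TABLE sparse_table (
  primary_key INT PRIMARY KEY,
  col_a VARCHAR(255),
  col_b VARCHAR(255)
);

-- Staging table shaped like one input file (the one carrying col_b).
CREATE TEMPORARY TABLE staging (
  primary_key INT PRIMARY KEY,
  col_b VARCHAR(255)
);

LOAD DATA INFILE '/path/to/file2.tsv'
INTO TABLE staging
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n';

-- Merge: insert rows for new keys, and for existing keys fill in
-- col_b without disturbing values already loaded into col_a.
INSERT INTO sparse_table (primary_key, col_b)
SELECT primary_key, col_b FROM staging
ON DUPLICATE KEY UPDATE col_b = VALUES(col_b);

DROP TEMPORARY TABLE staging;
```

Repeat the staging/merge step for each of the 10 files, changing the staged column each time; only the named column is touched on duplicate keys, so previously loaded columns survive.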

Also, you can use the Data Import tool (Append/Update mode) in dbForge Studio for MySQL (data import from CSV files has no limitations in the free Express Edition).

Devart