"Replacing" a table: If you are really "replacing" the entire table, this is much better.
For example, are you getting a replacement for your table every night? If so there is a much less invasive way to do such. (Not faster, but I suspect that is not the real issue.) The following keeps the real
table "online" except for a very brief, ACID, RENAME
.
CREATE TABLE new LIKE real;
-- load up `new` by whatever means
RENAME TABLE real TO old, new TO real;
DROP TABLE old;
LOAD DATA INFILE (applies to either adding or replacing)
If the "flat file" is comma- or tab-separated, see whether LOAD DATA can be used. It is very fast.
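A sketch of such a statement, assuming a hypothetical tab-separated file '/tmp/data.tsv' and a hypothetical target table `t`; adjust the path, table name, and delimiters to match your data:

```sql
-- Load a tab-separated flat file into `t`.
-- The file path and table name here are placeholders.
LOAD DATA INFILE '/tmp/data.tsv'
INTO TABLE t
FIELDS TERMINATED BY '\t'   -- use ',' for a CSV file
LINES TERMINATED BY '\n';
```

If the file lives on the client machine rather than the server, use LOAD DATA LOCAL INFILE instead (the server must have local loading enabled).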
Batch loading
- I recommend 100-1000 rows per batch. (100K rows at a shot may be so big that it leads to other slowdowns. This is especially bad if replication is involved.)
- Commit after each batch (or use autocommit=ON). Holding 3.2M rows' worth of changes in the undo log is costly.
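The batching advice above can be sketched as follows, assuming a hypothetical table `t(id, val)`; each multi-row INSERT is one batch, and each batch is committed on its own:

```sql
SET autocommit = 0;

-- Batch 1: up to ~1000 rows in a single multi-row INSERT
INSERT INTO t (id, val) VALUES
  (1, 'a'),
  (2, 'b'),
  (3, 'c');   -- ...continue up to ~1000 rows
COMMIT;

-- Batch 2, and so on, each followed by its own COMMIT
INSERT INTO t (id, val) VALUES
  (1001, 'x'),
  (1002, 'y');
COMMIT;
```

With autocommit=ON you can drop the explicit COMMITs, since each multi-row INSERT then commits by itself.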
Other notes
Rethink whether you really need two unique indexes (the PK is one). That doubles some of the work of inserting each row.