
My database got corrupted with errors like:

InnoDB log sequence number is in the future

so I went to a backup copy that I had and exported a SQL file that now needs to be re-imported.

The .sql file is close to 800 GB, and the actual DB was around 1.3 TB in size.

Now I've started the import, but it's moving at a ridiculously slow 10-12 GB/hour. At this rate my site will be down for six days before the data is fully imported, and I'm hoping for a faster way. Any suggestions on what I could do to speed things up?

I have a 64 GB RAM CentOS machine and have set `innodb_buffer_pool_size` to 45 GB, so that's been taken care of. Note that the source .sql file is on a SATA drive while MySQL's data directory is on an SSD; I don't have enough space on the SSD to put the .sql file on the same drive.

Possible solutions I haven't tried yet, because I'm wondering whether they're worth stopping the import and starting over for:

  1. I haven't yet tried setting `SET FOREIGN_KEY_CHECKS=0`, `SET autocommit=0`, and `SET unique_checks=0` before the import.
  2. Is it worth splitting the .sql file into multiple smaller files and running them in parallel? Would such a thing even work? I'm not sure how to split such a large file safely, and is there anything that could go wrong in doing this?
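For reference, a minimal sketch of option 1, assuming the dump sits at a hypothetical path `/mnt/sata/dump.sql`. These settings are session-scoped, so the dump file itself doesn't need to be edited; they only apply to the client session that replays it:

```sql
-- Sketch of option 1: relax checks for this session only, then replay the dump.
-- /mnt/sata/dump.sql is a placeholder path for the actual 800 GB dump file.
SET foreign_key_checks = 0;
SET unique_checks = 0;
SET autocommit = 0;
SOURCE /mnt/sata/dump.sql;
COMMIT;  -- required at the end because autocommit is off
```

Since these are plain session variables, they revert automatically when the session ends; nothing needs to be undone afterwards.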
Sam
  • Is your OS Linux or Windows? What are your settings? What are the disk read/write speeds? You need to analyze where the bottleneck is: it could be disk I/O, network speed (if you're importing remotely), or one of the other MySQL parameters. – FeHora Jun 17 '19 at 14:21
  • Are there foreign key relationships? A lot of indexes? Anything else that might slow down a single insert? Often it's a good idea to drop indexes and use `SET FOREIGN_KEY_CHECKS=0`, `SET autocommit=0`, and `SET unique_checks=0` so that the DB can concentrate on just data ingestion. If you set `autocommit=0`, just make sure to add a `COMMIT;` at the end of your SQL dump. – JNevill Jun 17 '19 at 14:22
  • It's CentOS. It's being imported from a SQL file on a SATA disk into MySQL, which is on an SSD. There are no foreign keys, but there are plenty of indexes. I'm not sure how to edit this huge SQL file to drop the indexes and add them back later; it took a really long time to generate the file, and I wouldn't want to repeat that process. – Sam Jun 17 '19 at 15:18
  • Have you already checked this question? https://serverfault.com/questions/118504/how-to-improve-mysql-insert-and-update-performance Maybe it will help you see how to tune the performance. – markvdlaan93 Jun 17 '19 at 15:38
    I assume the .sql file has been created by mysqldump. If so, the file will have one section for each table. So you can split the file before each CREATE TABLE statement, and load each part in parallel. If you remove the KEY definitions, but not PRIMARY KEY, from each CREATE TABLE statement, indexes will not be created. – Øystein Grøvlen Jun 17 '19 at 15:57
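Building on the splitting idea in the last comment, here is a hedged Python sketch: stream the dump once and start a new chunk file at each `-- Table structure for table` marker that mysqldump emits by default. The marker string, paths, and file naming are assumptions; verify them against your actual dump (e.g. with `head -100 dump.sql`) before relying on this.

```python
# Sketch: split a mysqldump file into per-table chunks by streaming it line by
# line, so it never loads the 800 GB file into memory. Assumes the default
# mysqldump section marker "-- Table structure for table".
import os

MARKER = "-- Table structure for table"

def split_dump(dump_path, out_dir):
    """Write one .sql chunk per table section; return the chunk paths in order.

    The first chunk (chunk_0000.sql) holds the dump's preamble (SET statements,
    character-set setup) that precedes the first table section.
    """
    os.makedirs(out_dir, exist_ok=True)
    chunk_paths = []
    out = None
    with open(dump_path, "r", encoding="utf-8", errors="replace") as src:
        for line in src:
            # Start a new chunk at each table marker (or for the preamble).
            if out is None or line.startswith(MARKER):
                if out:
                    out.close()
                path = os.path.join(out_dir, f"chunk_{len(chunk_paths):04d}.sql")
                chunk_paths.append(path)
                out = open(path, "w", encoding="utf-8")
            out.write(line)
    if out:
        out.close()
    return chunk_paths
```

The per-table chunks could then be loaded through separate mysql client sessions in parallel; note the preamble chunk's session settings would need to be replayed in each session, since they don't carry across connections.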

0 Answers