
We have a fairly large (about 1 TB) MySQL 5.7 DB hosted on RDS. We want to migrate it to Aurora 5.6, because of parallel queries (these are available only for 5.6).

It's not possible to do that via a snapshot, because the versions are not the same. We need to do a mysqldump and then restore it.

I tried several options, but most of them failed because of the size of the DB.

For example, a straight import:

nohup mysqldump -h mysql_5_7host.amazonaws.com -u user -pPass db_name | mysql -u user2 -pPass2 -h aurora_5_6.amazonaws.com db_name

Error in nohup.out:

mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table

The dump to an S3 file also failed:

nohup mysqldump -h mysql_5_7host.amazonaws.com -u user -pPass db_name | aws s3 cp - s3://bucket/db-dump.sql

Error:

An error occurred (InvalidArgument) when calling the UploadPart operation: Part number must be an integer between 1 and 10000, inclusive

Both of the previous methods worked for me on a smaller DB of about 10 GB, but not on 1 TB.

Is there any other way to migrate such a database?

Many thanks.

  • did you try a dump for every table separately? (a per-table loop is sketched after these comments) – nbk May 31 '19 at 13:39
  • @nbk the problem is probably with one table, which is about 500 GB, so I don't think this is going to help, but I can try it, thanks – Martin Rázus May 31 '19 at 13:52
  • What does "failed" mean? Did you get an error message? If yes, what is it? (the _exact_ message, not your interpretation of it) – guest May 31 '19 at 14:44
  • sorry @guest, I updated the question with errors – Martin Rázus May 31 '19 at 14:54
  • First error looks like a timeout, possibly a NAT losing track of the connection. Are you running from an EC2 instance in the same subnet? – guest May 31 '19 at 21:22
  • Second error is because `s3 cp` does a multi-part upload with a conservative part size (5 MB, I think). It doesn't appear to provide an option to control part size, but there is `--expected-size`. I don't know if it needs to be exact or is just used to calculate the part size, so it might be worth giving it `1000000000000` (I think that's the right number of zeroes); see the sketch after these comments. – guest May 31 '19 at 21:25
  • Also, in the first case you could try `--opt` on the `mysqldump` command. That's supposed to be the default, so I don't know if it will have any effect. I know that the MySQL libraries provide options for setting a long timeout, but I don't see any such option on `mysqldump`. For what it's worth, though, I've successfully dumped similar-size tables (300GB) without any special options. – guest May 31 '19 at 21:29
  • You may need to use `mysqldump --max-allowed-packet=XXXX` with a sufficiently large value (maximum 1GB); an example is sketched after these comments. The lost connection error is usually associated with data that is too large, like if you are dumping big BLOB/TEXT content or very long rows. – Bill Karwin May 31 '19 at 23:44
  • See for example https://stackoverflow.com/questions/93128/mysql-error-1153-got-a-packet-bigger-than-max-allowed-packet-bytes – Bill Karwin May 31 '19 at 23:44
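
To illustrate nbk's per-table suggestion: a minimal bash sketch, assuming the same placeholder hosts, credentials, and db_name as in the question, that dumps and restores one table at a time and records any failures so they can be retried individually:

for t in $(mysql -h mysql_5_7host.amazonaws.com -u user -pPass -N -e 'SHOW TABLES' db_name); do
  # -N suppresses the column header so only table names are returned
  mysqldump -h mysql_5_7host.amazonaws.com -u user -pPass db_name "$t" \
    | mysql -u user2 -pPass2 -h aurora_5_6.amazonaws.com db_name \
    || echo "$t" >> failed_tables.txt   # re-run only the tables listed here
done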
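For guest's `--expected-size` idea: the flag does exist on `aws s3 cp` as a hint of a stream's size so the CLI can pick a large enough multipart part size. Whether the value must be exact is uncertain, as guest says, so treating it as a rough upper bound is an assumption; a sketch using the value suggested above:

nohup mysqldump -h mysql_5_7host.amazonaws.com -u user -pPass db_name | aws s3 cp - s3://bucket/db-dump.sql --expected-size 1000000000000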
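And a sketch of Bill Karwin's `--max-allowed-packet` suggestion applied to the original pipe, set to the 1 GB maximum on both ends (applying it to the receiving `mysql` client as well is an assumption, not something the comment prescribes):

nohup mysqldump -h mysql_5_7host.amazonaws.com -u user -pPass --max-allowed-packet=1G db_name | mysql --max-allowed-packet=1G -u user2 -pPass2 -h aurora_5_6.amazonaws.com db_name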

0 Answers