
I have a MySQL database on Amazon RDS (about 600GB of data) and I need to move it back home to our local dedicated servers, but I don't know where to start. Every time I try to start a mysqldump it freezes. Is there a way to move it onto S3, maybe even splitting it into smaller parts before starting the download?

How would you go about migrating a 600GB MySQL DB?

Robin Westerlundh
  • Try `mysqldump --single-transaction` – Italo Borssatto Nov 29 '13 at 21:19
  • I have struggled with this myself (with RDS/SQL Server) - AWS really needs to provide the ability to back up and restore to S3, but as far as I know they don't – E.J. Brennan Nov 29 '13 at 21:40
  • Can you be more specific with what you mean by "it freezes?" What is "it?" Are you trying to pipe the backup directly into the local server, or save it to a local raw sql file, first? What is the sequence of commands you are using? – Michael - sqlbot Nov 29 '13 at 22:08
  • I was using MySQL Workbench and the app froze. I have now done a raw mysqldump and it actually does download, but it's slow :/ Is there a way to see progress other than looking at the file size? – Robin Westerlundh Nov 30 '13 at 00:10
  • With `mysqldump` the `--verbose` option will show you some progress information as each table is dumped. You should also use `--compress` to reduce the amount of time to transfer the data across the wire by activating the client/server compression protocol. – Michael - sqlbot Nov 30 '13 at 03:27
  • Thanks, but how will --compress affect the import time? I'm pretty horrified about the upcoming restore process :D We do have a monster of a machine (64GB RAM and 1TB SSD) but imports tend to be slow even locally. Any tip for the import part? – Robin Westerlundh Nov 30 '13 at 09:32
  • If you are using MySQL 5.6 you can use the replication option from RDS to your MySQL DB: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Exporting.NonRDSRepl.html – Guy Dec 01 '13 at 21:12
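Putting the flags suggested in the comments together, a minimal sketch of the dump command could look like the following (the RDS endpoint, user, and database name below are placeholders):

```bash
# Consistent, non-locking dump of InnoDB tables over a compressed connection.
# --single-transaction: take a consistent snapshot without locking tables
# --compress:           use the client/server compression protocol to cut transfer time
# --verbose:            print progress (to stderr) as each table is dumped
mysqldump \
  --host=mydb.xxxxxxxx.us-east-1.rds.amazonaws.com \
  --user=admin -p \
  --single-transaction \
  --compress \
  --verbose \
  mydatabase | gzip > mydatabase.sql.gz
```

If `pv` is installed, inserting it into the pipe (`... | pv | gzip > ...`) gives a live throughput readout on top of `--verbose`.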

1 Answer


Did you try the innobackupex script? It lets you back up a live database (hot backup) and tar|gzip the final backup, so you end up with a smaller file. It only works with innodb_file_per_table=1.
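Roughly, assuming file-system access to the MySQL data directory (user, password, and paths below are placeholders), the streamed and compressed variant looks like this:

```bash
# Hot backup with Percona XtraBackup's innobackupex wrapper, streamed
# through tar and compressed on the fly so the resulting file stays small.
innobackupex --user=backupuser --password=secret \
  --stream=tar /tmp | gzip > /backups/full-backup.tar.gz
```

Archives produced this way have to be unpacked with `tar -ixzf` (note the `-i` flag), because the stream is a concatenation of tar blocks.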

If you have downtime to move the database, you can also try optimizing tables to reclaim some space (especially if you have done a lot of deletes).
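For example, a rebuild of the delete-heavy tables (the host, user, and table names below are placeholders) can be run from the shell inside the downtime window:

```bash
# OPTIMIZE TABLE rebuilds each table and reclaims space left behind by deletes;
# it copies the table, so it can take a while on big tables.
mysql --host=mydb.xxxxxxxx.us-east-1.rds.amazonaws.com --user=admin -p \
  mydatabase -e "OPTIMIZE TABLE big_log_table, old_archive_table;"
```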

You can also think about getting rid of some data (logs, archives, etc.) and moving it over later.
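One way to do that with mysqldump (again, the host, database, and table names are placeholders) is to leave the bulky secondary tables out of the main dump and ship them separately once the core database is back up:

```bash
# Core dump without the bulky, non-critical tables
mysqldump --host=mydb.xxxxxxxx.us-east-1.rds.amazonaws.com --user=admin -p \
  --single-transaction --compress \
  --ignore-table=mydatabase.access_logs \
  --ignore-table=mydatabase.archived_orders \
  mydatabase | gzip > mydatabase-core.sql.gz

# Dump the excluded tables on their own, to be restored later
mysqldump --host=mydb.xxxxxxxx.us-east-1.rds.amazonaws.com --user=admin -p \
  --single-transaction --compress \
  mydatabase access_logs archived_orders | gzip > mydatabase-secondary.sql.gz
```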

ravnur