AWS documentation has a page that talks about importing and exporting data from a MySQL server, but it's mostly about import. The only export method I see in their documentation is a way to export 5.6 data using replication, which is documented here. I was wondering if there is a simpler way to export data using mysqldump and load it into a local database. The database that I want to export is not huge, maybe 1 GB, so size is not an issue.
4 Answers
Sure.
Take the dump from the remote RDS Server:
mysqldump -h rds.host.name -u remote_user_name -p remote_db > dump.sql
When prompted, enter the password for remote_user_name on the remote server.
Load it into your local MySQL instance:
mysql -u local_user_name -p local_db < dump.sql
Also, if you own an EC2 server in the same region, I'd suggest taking the dump there, compressing the file, and then using scp to copy it to your local machine. Typically, the compressed version of the file is much smaller, so you'll be able to transfer it quicker.
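A minimal sketch of that EC2 route, assuming the instance can already reach the RDS endpoint; ec2.host.name and ec2-user are placeholders for your own instance address and login, and the other names match the commands above:

# On the EC2 instance: dump and compress in one step, never writing the raw SQL to disk
mysqldump -h rds.host.name -u remote_user_name -p remote_db | gzip > dump.sql.gz
# From your local machine: copy the compressed dump down, then load it
scp ec2-user@ec2.host.name:dump.sql.gz .
gunzip < dump.sql.gz | mysql -u local_user_name -p local_db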

- Or, add the `--compress` option to `mysqldump` for faster transfer from a remote server. The resulting output file isn't compressed, but over the wire you see significant bandwidth savings and a faster transfer (see the sketch after these comments). – Michael - sqlbot May 13 '15 at 21:52
- But you may have to add some more flags to compensate, like -K, so you don't get foreign key issues on import. – nafg Jul 29 '15 at 00:42
- If you can't connect, you may need to add your IP to the security groups. It's in the EC2 dashboard > Network and Security > Security Groups. – homebrand Mar 25 '17 at 00:48
- Yes. https://stackoverflow.com/a/2987371/174184 – TJ- Dec 08 '21 at 11:24
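Building on the comments above, a hedged sketch; the security-group ID and CIDR below are placeholders you'd replace with your own values:

# Compress the stream over the wire; the file written to disk is still plain SQL
mysqldump --compress -h rds.host.name -u remote_user_name -p remote_db > dump.sql
# If the connection hangs or is refused, open port 3306 to your IP in the
# security group attached to the RDS instance (IDs here are made up)
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 --protocol tcp --port 3306 --cidr 203.0.113.4/32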
To export a database from RDS:
mysqldump -h rds.host.name -u remote_user_name -p remote_db > remote_db.sql
When prompted, enter the password.
To import a database into RDS:
mysql -h rds.host.name -u remote_user_name -p remote_db < remote_db.sql
When prompted, enter the password.
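If you just want the remote data in a local database, you can also skip the intermediate file entirely. A minimal sketch, assuming the passwords are stored in ~/.my.cnf, since two interactive -p prompts can't share one pipeline:

# Stream the dump from RDS straight into the local server, no dump file on disk
mysqldump -h rds.host.name -u remote_user_name remote_db | mysql -u local_user_name local_db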

Another very easy option is to use MySQL Workbench. In the menu select 'Database' and then 'Data Export', pick the right options and the target file... and you're done! Easy does it!

- Unfortunately this can be incredibly slow with even moderately-sized data sets. – JohnW Apr 14 '21 at 01:52
This is an old topic, but in case you are getting the following error:
Warning: A partial dump from a server that has GTIDs will by default include the GTIDs of all transactions, even those that changed suppressed parts of the database. If you don't want to restore GTIDs, pass --set-gtid-purged=OFF. To make a complete dump, pass --all-databases --triggers --routines --events.
mysqldump: Couldn't execute 'SELECT COLUMN_NAME, JSON_EXTRACT(HISTOGRAM, '$."number-of-buckets-specified"') FROM information_schema.COLUMN_STATISTICS WHERE SCHEMA_NAME = 'db_name' AND TABLE_NAME = 'wp_actionscheduler_actions';': Unknown table 'COLUMN_STATISTICS' in information_schema
Simply, use the following command:
mysqldump -uUSER -p DATABASE --host=WriterEndpointNOport --set-gtid-purged=OFF --column-statistics=0 > bd_export.sql
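Some context on why this works (my addition, not from the original answer): information_schema.COLUMN_STATISTICS only exists in MySQL 8.0, so the second error usually means a mysqldump 8.x client is talking to a 5.7-era server, and --column-statistics=0 simply stops the client from querying that table. A quick way to confirm which client you're running:

# Compare the client version against the server you're dumping from
mysqldump --version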
