82

While doing a MySQL dump is easy enough, I have a live dedicated MySQL server on which I want to set up replication. To do this, I need dumps of the databases to import to my replication slave.

The issue is that when I do the dumps, MySQL goes at them full force and ties up resources from the sites that are connecting to it. Is there a way to limit the dump queries to a low-priority state in which preference is given to live connections? The idea being that the load from external sites is not affected by the effort of MySQL to do a full dump...

z33k3r
  • You should select the top answer as correct. – bryan kennedy Apr 03 '16 at 15:33
  • Are you worried more about disk I/O? Or network traffic? – Rick James Sep 24 '16 at 06:05
  • If using mysqldump, see top answer below. If using MySQL Workbench, go to Advanced options, *uncheck* "lock-tables" [to not block] and *check* "compress" [less network bandwidth used, so backup goes faster]. OTOH, if you have a "replication slave", rather than dumping (as one operation) then importing (as a second operation) to replication slave, investigate other alternatives for copying directly to the slave. – ToolmakerSteve Oct 25 '19 at 08:40
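Following up on the comment above about copying directly to the slave: a minimal editorial sketch (not from the question or the answers) of seeding a replica in one pass. master.example.com / slave.example.com, USER, PASSWORD and db1 are placeholders, and it assumes binary logging is already enabled on the master; --master-data=2 records the binary log coordinates in the dump so the slave can be pointed at the right position.

mysqldump -u USER -p --single-transaction --quick --master-data=2 --databases db1 \
  | ssh slave.example.com "mysql -u USER -pPASSWORD"
# Then, on the slave, run CHANGE MASTER TO ... using the coordinates that
# --master-data recorded in the dump, followed by START SLAVE;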

6 Answers

182

I have very large databases with tens of thousands of tables, some of which hold up to 5GB of data across tens of millions of rows (I run a popular service)... I've always had headaches when backing up these databases. With the default mysqldump options, the server load quickly spirals out of control and locks everything up... affecting my users. Trying to stop the process can lead to crashed tables and lots of downtime while those tables are recovered.

I now use...

mysqldump -u USER -p --single-transaction --quick --lock-tables=false DATABASE | gzip > OUTPUT.gz

The mysqldump reference at dev.mysql.com even says...

To dump large tables, you should combine the --single-transaction option with --quick.

It says nothing about that being dependent on the database being InnoDB; mine are MyISAM and this worked beautifully for me. Server load was almost completely unaffected and my service ran like a Rolex during the entire process. If you have large databases and backing them up is affecting your end users... this IS the solution. ;)
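For context, a minimal sketch of how this command might be wrapped into a scheduled backup script; USER, DATABASE and the /backups path are placeholders, not part of the original answer:

#!/bin/bash
# Editorial sketch: low-impact nightly dump based on the command above.
#   --single-transaction  consistent snapshot (for InnoDB) without read locks
#   --quick               stream rows instead of buffering whole tables in memory
#   --lock-tables=false   do not take the table locks that block live queries
# -p prompts for a password; for cron you would typically read credentials
# from ~/.my.cnf instead.
set -o pipefail
STAMP=$(date +%F)
mysqldump -u USER -p --single-transaction --quick --lock-tables=false DATABASE \
  | gzip > /backups/DATABASE-"$STAMP".sql.gz \
  || echo "dump of DATABASE failed" >&2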

Dan
CA3LE
  • This worked well for me and halved my page load times while mysqldumps take place. – Richard Frank Dec 08 '12 at 15:11
  • As MyISAM doesn't support transactions, I am not sure your backup would be consistent, especially if you have a big database. I would recommend it for InnoDB-only databases. – Renan May 25 '15 at 19:05
  • Tested on InnoDB. You can check your engine using `SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES` or `SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES WHERE TABLE_SCHEMA = ''` – vladkras Sep 18 '15 at 14:54
  • This one worked perfectly for my 20+ GB database. As said by the author, it only works for the InnoDB type ;) – Anders Andersen Jun 28 '17 at 12:43
  • dev.mysql.com says, "When using this option, you should keep in mind that only InnoDB tables are dumped in a consistent state." – arlomedia Jul 19 '17 at 00:26
  • I ended up implementing this solution with my MyISAM tables, because in any case I've needed to use a database backup, I only needed to retrieve a portion of the data and consistency among the very latest data wasn't an issue. Also, I figure any inconsistencies created during the backup window would pale in comparison to any event catastrophic enough to require a full database restore. But I plan to explore a switch to InnoDB as time allows (not easy with my hosting situation, unfortunately). – arlomedia Aug 07 '17 at 00:42
  • VERY NICE!!!! Thank you. We have 0 downtime in our operations, and this allows me to dump for backups. – Andy Nov 02 '17 at 17:34
  • Just a small note: using `gzip` helps to reduce `mysqldump` CPU usage in this case. I was just debugging things and noticed that without `gzip` MySQL uses, let's say, 100% CPU, and it uses 25% CPU only, when you pipe output to `gzip`. Of course, `gzip` takes additional CPU, but, at least, you perform less load on MySQL – Ronin May 29 '20 at 22:57
53

If using InnoDB tables, use the --single-transaction and --quick options for mysqldump
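A sketch of what that looks like in practice; the USER and DATABASE names and the output file are placeholders, not part of the original answer:

mysqldump -u USER -p --single-transaction --quick DATABASE > DATABASE.sql

--single-transaction takes a consistent InnoDB snapshot without read locks, and --quick streams rows to the output instead of buffering each table in memory.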

Drew Clayton
  • Wow - what a difference on an InnoDB table with 1M entries. Instead of blocking the whole server for 10 minutes, everything keeps running smoothly with those options. Should have found this post earlier ~~~ – BurninLeo Jan 24 '12 at 08:40
  • This is the correct answer if you have InnoDB, since by default InnoDB begins/commits a single transaction on any query, and during a dump there are many queries. – gaRex Jun 18 '12 at 06:58
  • Made a big difference for us too – Kevin Parker Mar 20 '18 at 14:10
4

Besides the already mentioned solution of using --single-transaction and --quick, I would not pipe the result directly into gzip but first dump it as a .sql file, and then gzip it (use && instead of |).

The dump itself will be faster, so there is less downtime (in my tests it was about twice as fast).

So I would go for "&& gzip" instead of "| gzip".

Important: check for free disk space first with df -h, since you will need more than when piping to | gzip.

mysqldump --single-transaction --quick -u user -p my_db_name > dump_name.sql && gzip dump_name.sql

-> which will also result in 1 file called dump_name.sql.gz
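A sketch of that workflow with the free-space check built in; DUMP_DIR, USER and the 20 GB threshold are example values, not part of the original answer:

# Editorial sketch: dump first, compress afterwards, but refuse to start if the
# target filesystem looks too full. DUMP_DIR, USER, my_db_name and the 20 GB
# threshold are placeholders / example values.
DUMP_DIR=/backups
NEEDED_KB=$((20 * 1024 * 1024))   # ~20 GB; size it to your uncompressed dump
AVAIL_KB=$(df -Pk "$DUMP_DIR" | awk 'NR==2 {print $4}')
if [ "$AVAIL_KB" -lt "$NEEDED_KB" ]; then
    echo "Not enough free space in $DUMP_DIR" >&2
    exit 1
fi
mysqldump --single-transaction --quick -u USER -p my_db_name > "$DUMP_DIR/dump_name.sql" \
  && gzip "$DUMP_DIR/dump_name.sql"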

Julesezaar
  • 2,658
  • 1
  • 21
  • 21
0

1) First, check your MySQL version. Use at least 5.7 so you can take advantage of multi-threaded dumping (for example mysqlpump, introduced in 5.7, can dump in parallel, while mysqldump itself uses only one thread). With older versions it is not a good idea to use the database and run a dump at the same time if you have a large database.

2) Prefer to write your backup to a different disk than the one holding the database, because of read/write performance; otherwise you may need something like RAID 10.

3) mysqlbackup from MySQL Enterprise is better, but it is paid; I don't know if that is an option for you.

4) Sometimes many tables don't need a transaction, so use a transaction only on the tables that need it.

5) Transactions are generally necessary; use the InnoDB engine for better performance so you don't have to lock tables.

6) In some cases it is better to write your own program, so you can open a read-only transaction on just the tables you need without locking anything, and add some sleeps so you don't freeze your service (see the sketch after this list).
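As a rough illustration of point 6 (an editorial sketch, not the answerer's program): dump one table at a time and sleep between tables to spread the load. USER, PASSWORD, DATABASE and the 5-second pause are placeholders, and note that each per-table dump is consistent on its own but not consistent with the others.

# Editorial sketch for point 6: dump table by table, pausing between tables so
# live traffic gets breathing room. USER, PASSWORD, DATABASE and the 5-second
# sleep are placeholders. Each table dump is internally consistent, but the
# tables are not consistent with each other.
for TABLE in $(mysql -u USER -pPASSWORD -N -e "SHOW TABLES" DATABASE); do
    mysqldump -u USER -pPASSWORD --single-transaction --quick DATABASE "$TABLE" \
      | gzip > "backup_${TABLE}.sql.gz"
    sleep 5
done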

-3

Use nice and ionice (and pipe through gzip) to execute the command at low priority.

nice -n 10 ionice -c2 -n 7 mysqldump db-name | gzip > db-name.sql.gz 
khushi muhammad
-3

You can prefix the mysqldump command with the following:

ionice -c3 nice -n19 mysqldump ...

This will run it at low I/O and CPU priority, which should limit its impact.

Note that this only delays the scheduling of the dump process; the individual queries themselves will still be as intensive as they were before, just with longer breaks between them.

Community
  • This method did not affect the outcome of doing a mysqldump on a live server. I tried various changes to the settings of priorities and such. I'll also add that I'm dealing with a 7GB database here... – z33k3r Apr 15 '11 at 07:28
  • For the time being, we had to do a late night (Midnight-5am) DB lock and dump. A more elegant solution is still desired... – z33k3r Apr 15 '11 at 14:44
  • That will just lower the priority of the mysqldump process. The mysqldump process, however, is just querying the database, and the queries themselves run on the mysql server with the usual priority and cause (almost) the same impact on the server. mysqldump is not the bottleneck. – adhominem Nov 25 '13 at 11:41